00:00:00.001 Started by upstream project "autotest-per-patch" build number 120584 00:00:00.001 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.057 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.058 The recommended git tool is: git 00:00:00.058 using credential 00000000-0000-0000-0000-000000000002 00:00:00.060 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.088 Fetching changes from the remote Git repository 00:00:00.090 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.125 Using shallow fetch with depth 1 00:00:00.125 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.125 > git --version # timeout=10 00:00:00.151 > git --version # 'git version 2.39.2' 00:00:00.151 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.151 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.151 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.150 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.161 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.173 Checking out Revision a704ed4d86859cb8cbec080c78b138476da6ee34 (FETCH_HEAD) 00:00:06.173 > git config core.sparsecheckout # timeout=10 00:00:06.186 > git read-tree -mu HEAD # timeout=10 00:00:06.203 > git checkout -f a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=5 00:00:06.225 Commit message: "packer: Insert post-processors only if at least one is defined" 00:00:06.225 > git rev-list --no-walk a704ed4d86859cb8cbec080c78b138476da6ee34 # timeout=10 00:00:06.329 [Pipeline] Start of Pipeline 00:00:06.344 [Pipeline] library 00:00:06.346 Loading library shm_lib@master 00:00:06.346 Library shm_lib@master is cached. Copying from home. 00:00:06.362 [Pipeline] node 00:00:06.376 Running on GP11 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:06.377 [Pipeline] { 00:00:06.387 [Pipeline] catchError 00:00:06.388 [Pipeline] { 00:00:06.403 [Pipeline] wrap 00:00:06.414 [Pipeline] { 00:00:06.422 [Pipeline] stage 00:00:06.424 [Pipeline] { (Prologue) 00:00:06.620 [Pipeline] sh 00:00:06.903 + logger -p user.info -t JENKINS-CI 00:00:06.918 [Pipeline] echo 00:00:06.919 Node: GP11 00:00:06.925 [Pipeline] sh 00:00:07.213 [Pipeline] setCustomBuildProperty 00:00:07.225 [Pipeline] echo 00:00:07.227 Cleanup processes 00:00:07.232 [Pipeline] sh 00:00:07.511 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.511 1804101 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.526 [Pipeline] sh 00:00:07.813 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.813 ++ grep -v 'sudo pgrep' 00:00:07.813 ++ awk '{print $1}' 00:00:07.813 + sudo kill -9 00:00:07.813 + true 00:00:07.828 [Pipeline] cleanWs 00:00:07.837 [WS-CLEANUP] Deleting project workspace... 00:00:07.837 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.842 [WS-CLEANUP] done 00:00:07.850 [Pipeline] setCustomBuildProperty 00:00:07.866 [Pipeline] sh 00:00:08.145 + sudo git config --global --replace-all safe.directory '*' 00:00:08.228 [Pipeline] nodesByLabel 00:00:08.229 Found a total of 1 nodes with the 'sorcerer' label 00:00:08.242 [Pipeline] httpRequest 00:00:08.248 HttpMethod: GET 00:00:08.249 URL: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:00:08.254 Sending request to url: http://10.211.164.101/packages/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:00:08.265 Response Code: HTTP/1.1 200 OK 00:00:08.266 Success: Status code 200 is in the accepted range: 200,404 00:00:08.266 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:00:10.428 [Pipeline] sh 00:00:10.713 + tar --no-same-owner -xf jbp_a704ed4d86859cb8cbec080c78b138476da6ee34.tar.gz 00:00:10.735 [Pipeline] httpRequest 00:00:10.739 HttpMethod: GET 00:00:10.740 URL: http://10.211.164.101/packages/spdk_bd2a0934ef9721fec29ce66b1441da5ac700b645.tar.gz 00:00:10.741 Sending request to url: http://10.211.164.101/packages/spdk_bd2a0934ef9721fec29ce66b1441da5ac700b645.tar.gz 00:00:10.754 Response Code: HTTP/1.1 200 OK 00:00:10.755 Success: Status code 200 is in the accepted range: 200,404 00:00:10.755 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_bd2a0934ef9721fec29ce66b1441da5ac700b645.tar.gz 00:00:32.545 [Pipeline] sh 00:00:32.831 + tar --no-same-owner -xf spdk_bd2a0934ef9721fec29ce66b1441da5ac700b645.tar.gz 00:00:35.435 [Pipeline] sh 00:00:35.720 + git -C spdk log --oneline -n5 00:00:35.720 bd2a0934e test/bdev_raid: make base bdev block size configurable 00:00:35.720 ce34c7fd8 raid: move blocklen_shift to r5f_info 00:00:35.720 d61131ae0 raid5f: interleaved md support 00:00:35.720 02f0918d1 ut/raid: allow testing interleaved md 00:00:35.720 e69c6fb44 ut/raid: refactor setting test params 00:00:35.733 [Pipeline] } 00:00:35.752 [Pipeline] // stage 00:00:35.762 [Pipeline] stage 00:00:35.764 [Pipeline] { (Prepare) 00:00:35.783 [Pipeline] writeFile 00:00:35.800 [Pipeline] sh 00:00:36.085 + logger -p user.info -t JENKINS-CI 00:00:36.098 [Pipeline] sh 00:00:36.383 + logger -p user.info -t JENKINS-CI 00:00:36.396 [Pipeline] sh 00:00:36.683 + cat autorun-spdk.conf 00:00:36.683 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.683 SPDK_TEST_NVMF=1 00:00:36.683 SPDK_TEST_NVME_CLI=1 00:00:36.683 SPDK_TEST_NVMF_NICS=mlx5 00:00:36.683 SPDK_RUN_UBSAN=1 00:00:36.683 NET_TYPE=phy 00:00:36.691 RUN_NIGHTLY=0 00:00:36.696 [Pipeline] readFile 00:00:36.721 [Pipeline] withEnv 00:00:36.723 [Pipeline] { 00:00:36.738 [Pipeline] sh 00:00:37.025 + set -ex 00:00:37.025 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:00:37.025 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:37.025 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:37.025 ++ SPDK_TEST_NVMF=1 00:00:37.025 ++ SPDK_TEST_NVME_CLI=1 00:00:37.025 ++ SPDK_TEST_NVMF_NICS=mlx5 00:00:37.025 ++ SPDK_RUN_UBSAN=1 00:00:37.025 ++ NET_TYPE=phy 00:00:37.025 ++ RUN_NIGHTLY=0 00:00:37.025 + case $SPDK_TEST_NVMF_NICS in 00:00:37.025 + DRIVERS=mlx5_ib 00:00:37.025 + [[ -n mlx5_ib ]] 00:00:37.025 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:37.025 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:37.025 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:37.025 rmmod: ERROR: Module irdma is not currently loaded 00:00:37.025 rmmod: ERROR: Module i40iw is not currently loaded 
00:00:37.025 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:37.025 + true 00:00:37.025 + for D in $DRIVERS 00:00:37.025 + sudo modprobe mlx5_ib 00:00:37.025 + exit 0 00:00:37.036 [Pipeline] } 00:00:37.054 [Pipeline] // withEnv 00:00:37.060 [Pipeline] } 00:00:37.075 [Pipeline] // stage 00:00:37.085 [Pipeline] catchError 00:00:37.087 [Pipeline] { 00:00:37.102 [Pipeline] timeout 00:00:37.102 Timeout set to expire in 40 min 00:00:37.103 [Pipeline] { 00:00:37.121 [Pipeline] stage 00:00:37.123 [Pipeline] { (Tests) 00:00:37.139 [Pipeline] sh 00:00:37.425 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:00:37.425 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:00:37.425 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:00:37.425 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:00:37.425 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:37.425 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:00:37.425 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:00:37.425 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:00:37.425 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:00:37.425 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:00:37.425 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:00:37.425 + source /etc/os-release 00:00:37.425 ++ NAME='Fedora Linux' 00:00:37.425 ++ VERSION='38 (Cloud Edition)' 00:00:37.425 ++ ID=fedora 00:00:37.425 ++ VERSION_ID=38 00:00:37.425 ++ VERSION_CODENAME= 00:00:37.425 ++ PLATFORM_ID=platform:f38 00:00:37.425 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:37.425 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:37.425 ++ LOGO=fedora-logo-icon 00:00:37.425 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:37.425 ++ HOME_URL=https://fedoraproject.org/ 00:00:37.425 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:37.425 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:37.425 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:37.425 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:37.425 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:37.425 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:37.425 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:37.425 ++ SUPPORT_END=2024-05-14 00:00:37.425 ++ VARIANT='Cloud Edition' 00:00:37.425 ++ VARIANT_ID=cloud 00:00:37.425 + uname -a 00:00:37.425 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:37.425 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:00:38.364 Hugepages 00:00:38.364 node hugesize free / total 00:00:38.364 node0 1048576kB 0 / 0 00:00:38.364 node0 2048kB 0 / 0 00:00:38.364 node1 1048576kB 0 / 0 00:00:38.364 node1 2048kB 0 / 0 00:00:38.364 00:00:38.364 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:38.364 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:38.364 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:38.364 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:38.364 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:38.364 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:00:38.364 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:38.364 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:38.364 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:38.364 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:38.364 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:38.364 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:38.364 I/OAT 0000:80:04.3 
8086 0e23 1 ioatdma - - 00:00:38.364 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:38.364 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:38.364 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:38.364 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:38.364 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:38.364 + rm -f /tmp/spdk-ld-path 00:00:38.364 + source autorun-spdk.conf 00:00:38.364 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.364 ++ SPDK_TEST_NVMF=1 00:00:38.364 ++ SPDK_TEST_NVME_CLI=1 00:00:38.364 ++ SPDK_TEST_NVMF_NICS=mlx5 00:00:38.364 ++ SPDK_RUN_UBSAN=1 00:00:38.364 ++ NET_TYPE=phy 00:00:38.364 ++ RUN_NIGHTLY=0 00:00:38.364 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:38.364 + [[ -n '' ]] 00:00:38.364 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:38.624 + for M in /var/spdk/build-*-manifest.txt 00:00:38.624 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:38.624 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:00:38.624 + for M in /var/spdk/build-*-manifest.txt 00:00:38.624 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:38.624 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:00:38.624 ++ uname 00:00:38.624 + [[ Linux == \L\i\n\u\x ]] 00:00:38.624 + sudo dmesg -T 00:00:38.624 + sudo dmesg --clear 00:00:38.624 + dmesg_pid=1804779 00:00:38.624 + [[ Fedora Linux == FreeBSD ]] 00:00:38.624 + sudo dmesg -Tw 00:00:38.624 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:38.624 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:38.624 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:38.624 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:38.624 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:38.624 + [[ -x /usr/src/fio-static/fio ]] 00:00:38.624 + export FIO_BIN=/usr/src/fio-static/fio 00:00:38.624 + FIO_BIN=/usr/src/fio-static/fio 00:00:38.624 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:38.624 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:38.624 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:38.624 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:38.624 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:38.624 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:38.624 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:38.624 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:38.624 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:38.624 Test configuration: 00:00:38.624 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.624 SPDK_TEST_NVMF=1 00:00:38.624 SPDK_TEST_NVME_CLI=1 00:00:38.624 SPDK_TEST_NVMF_NICS=mlx5 00:00:38.624 SPDK_RUN_UBSAN=1 00:00:38.624 NET_TYPE=phy 00:00:38.624 RUN_NIGHTLY=0 17:17:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:00:38.624 17:17:05 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:38.624 17:17:05 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:38.624 17:17:05 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:38.624 17:17:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.624 17:17:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.624 17:17:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.624 17:17:05 -- paths/export.sh@5 -- $ export PATH 00:00:38.624 17:17:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.624 17:17:05 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:00:38.624 17:17:05 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:38.624 17:17:05 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713453425.XXXXXX 00:00:38.624 17:17:05 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713453425.eK3Y7Z 00:00:38.624 17:17:05 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:38.624 17:17:05 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:00:38.624 17:17:05 -- common/autobuild_common.sh@444 
-- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:00:38.624 17:17:05 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:38.624 17:17:05 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:38.624 17:17:05 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:38.624 17:17:05 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:38.624 17:17:05 -- common/autotest_common.sh@10 -- $ set +x 00:00:38.624 17:17:05 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:00:38.624 17:17:05 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:38.624 17:17:05 -- pm/common@17 -- $ local monitor 00:00:38.624 17:17:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.624 17:17:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1804813 00:00:38.624 17:17:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.624 17:17:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1804815 00:00:38.624 17:17:05 -- pm/common@21 -- $ date +%s 00:00:38.624 17:17:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.624 17:17:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1804818 00:00:38.624 17:17:05 -- pm/common@21 -- $ date +%s 00:00:38.625 17:17:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:38.625 17:17:05 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1804820 00:00:38.625 17:17:05 -- pm/common@26 -- $ sleep 1 00:00:38.625 17:17:05 -- pm/common@21 -- $ date +%s 00:00:38.625 17:17:05 -- pm/common@21 -- $ date +%s 00:00:38.625 17:17:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713453425 00:00:38.625 17:17:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713453425 00:00:38.625 17:17:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713453425 00:00:38.625 17:17:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713453425 00:00:38.625 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713453425_collect-vmstat.pm.log 00:00:38.625 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713453425_collect-bmc-pm.bmc.pm.log 00:00:38.625 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713453425_collect-cpu-load.pm.log 00:00:38.625 Redirecting to 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713453425_collect-cpu-temp.pm.log 00:00:39.565 17:17:06 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:39.565 17:17:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:39.565 17:17:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:39.565 17:17:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:39.565 17:17:06 -- spdk/autobuild.sh@16 -- $ date -u 00:00:39.565 Thu Apr 18 03:17:06 PM UTC 2024 00:00:39.565 17:17:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:39.565 v24.05-pre-418-gbd2a0934e 00:00:39.565 17:17:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:39.565 17:17:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:39.565 17:17:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:39.565 17:17:06 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:39.565 17:17:06 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:39.565 17:17:06 -- common/autotest_common.sh@10 -- $ set +x 00:00:39.822 ************************************ 00:00:39.822 START TEST ubsan 00:00:39.822 ************************************ 00:00:39.822 17:17:06 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:39.822 using ubsan 00:00:39.822 00:00:39.822 real 0m0.000s 00:00:39.822 user 0m0.000s 00:00:39.822 sys 0m0.000s 00:00:39.822 17:17:06 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:39.822 17:17:06 -- common/autotest_common.sh@10 -- $ set +x 00:00:39.822 ************************************ 00:00:39.822 END TEST ubsan 00:00:39.822 ************************************ 00:00:39.822 17:17:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:39.822 17:17:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:39.822 17:17:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:39.822 17:17:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:39.822 17:17:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:39.822 17:17:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:39.822 17:17:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:39.822 17:17:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:39.822 17:17:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:00:39.822 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:00:39.822 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:00:40.081 Using 'verbs' RDMA provider 00:00:50.641 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:00.642 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:00.642 Creating mk/config.mk...done. 00:01:00.642 Creating mk/cc.flags.mk...done. 00:01:00.642 Type 'make' to build. 
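For reference, the configure step that just finished can be replayed outside Jenkins. This is a minimal sketch, not the CI's own procedure: it assumes an SPDK checkout at ./spdk (the CI workspace uses /var/jenkins/workspace/nvmf-phy-autotest/spdk) with submodules and build prerequisites already installed, and it reuses only the flags visible in this log.

  # sketch: rerun the configure + build phase recorded above (assumed local checkout path)
  cd ./spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-shared
  make -j"$(nproc)"   # the CI job below builds with -j48; nproc is a portable stand-in
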
00:01:00.642 17:17:26 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:00.642 17:17:26 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:00.642 17:17:26 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:00.642 17:17:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:00.642 ************************************ 00:01:00.642 START TEST make 00:01:00.642 ************************************ 00:01:00.642 17:17:26 -- common/autotest_common.sh@1111 -- $ make -j48 00:01:00.642 make[1]: Nothing to be done for 'all'. 00:01:08.787 The Meson build system 00:01:08.787 Version: 1.3.1 00:01:08.787 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:08.787 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:08.787 Build type: native build 00:01:08.787 Program cat found: YES (/usr/bin/cat) 00:01:08.787 Project name: DPDK 00:01:08.787 Project version: 23.11.0 00:01:08.787 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:08.787 C linker for the host machine: cc ld.bfd 2.39-16 00:01:08.787 Host machine cpu family: x86_64 00:01:08.787 Host machine cpu: x86_64 00:01:08.787 Message: ## Building in Developer Mode ## 00:01:08.787 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:08.787 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:08.787 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:08.787 Program python3 found: YES (/usr/bin/python3) 00:01:08.787 Program cat found: YES (/usr/bin/cat) 00:01:08.787 Compiler for C supports arguments -march=native: YES 00:01:08.787 Checking for size of "void *" : 8 00:01:08.787 Checking for size of "void *" : 8 (cached) 00:01:08.787 Library m found: YES 00:01:08.787 Library numa found: YES 00:01:08.787 Has header "numaif.h" : YES 00:01:08.787 Library fdt found: NO 00:01:08.787 Library execinfo found: NO 00:01:08.787 Has header "execinfo.h" : YES 00:01:08.787 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:08.787 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:08.787 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:08.787 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:08.787 Run-time dependency openssl found: YES 3.0.9 00:01:08.787 Run-time dependency libpcap found: YES 1.10.4 00:01:08.787 Has header "pcap.h" with dependency libpcap: YES 00:01:08.787 Compiler for C supports arguments -Wcast-qual: YES 00:01:08.787 Compiler for C supports arguments -Wdeprecated: YES 00:01:08.787 Compiler for C supports arguments -Wformat: YES 00:01:08.787 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:08.787 Compiler for C supports arguments -Wformat-security: NO 00:01:08.787 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:08.787 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:08.787 Compiler for C supports arguments -Wnested-externs: YES 00:01:08.787 Compiler for C supports arguments -Wold-style-definition: YES 00:01:08.787 Compiler for C supports arguments -Wpointer-arith: YES 00:01:08.787 Compiler for C supports arguments -Wsign-compare: YES 00:01:08.787 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:08.787 Compiler for C supports arguments -Wundef: YES 00:01:08.787 Compiler for C supports arguments -Wwrite-strings: YES 00:01:08.787 Compiler for C supports arguments 
-Wno-address-of-packed-member: YES 00:01:08.787 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:08.787 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:08.787 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:08.787 Program objdump found: YES (/usr/bin/objdump) 00:01:08.787 Compiler for C supports arguments -mavx512f: YES 00:01:08.787 Checking if "AVX512 checking" compiles: YES 00:01:08.787 Fetching value of define "__SSE4_2__" : 1 00:01:08.787 Fetching value of define "__AES__" : 1 00:01:08.787 Fetching value of define "__AVX__" : 1 00:01:08.787 Fetching value of define "__AVX2__" : (undefined) 00:01:08.787 Fetching value of define "__AVX512BW__" : (undefined) 00:01:08.787 Fetching value of define "__AVX512CD__" : (undefined) 00:01:08.787 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:08.787 Fetching value of define "__AVX512F__" : (undefined) 00:01:08.787 Fetching value of define "__AVX512VL__" : (undefined) 00:01:08.787 Fetching value of define "__PCLMUL__" : 1 00:01:08.787 Fetching value of define "__RDRND__" : 1 00:01:08.787 Fetching value of define "__RDSEED__" : (undefined) 00:01:08.787 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:08.787 Fetching value of define "__znver1__" : (undefined) 00:01:08.787 Fetching value of define "__znver2__" : (undefined) 00:01:08.787 Fetching value of define "__znver3__" : (undefined) 00:01:08.787 Fetching value of define "__znver4__" : (undefined) 00:01:08.787 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:08.787 Message: lib/log: Defining dependency "log" 00:01:08.787 Message: lib/kvargs: Defining dependency "kvargs" 00:01:08.787 Message: lib/telemetry: Defining dependency "telemetry" 00:01:08.787 Checking for function "getentropy" : NO 00:01:08.787 Message: lib/eal: Defining dependency "eal" 00:01:08.787 Message: lib/ring: Defining dependency "ring" 00:01:08.787 Message: lib/rcu: Defining dependency "rcu" 00:01:08.787 Message: lib/mempool: Defining dependency "mempool" 00:01:08.787 Message: lib/mbuf: Defining dependency "mbuf" 00:01:08.787 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:08.787 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:08.787 Compiler for C supports arguments -mpclmul: YES 00:01:08.787 Compiler for C supports arguments -maes: YES 00:01:08.787 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:08.787 Compiler for C supports arguments -mavx512bw: YES 00:01:08.787 Compiler for C supports arguments -mavx512dq: YES 00:01:08.787 Compiler for C supports arguments -mavx512vl: YES 00:01:08.787 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:08.787 Compiler for C supports arguments -mavx2: YES 00:01:08.787 Compiler for C supports arguments -mavx: YES 00:01:08.787 Message: lib/net: Defining dependency "net" 00:01:08.787 Message: lib/meter: Defining dependency "meter" 00:01:08.787 Message: lib/ethdev: Defining dependency "ethdev" 00:01:08.787 Message: lib/pci: Defining dependency "pci" 00:01:08.787 Message: lib/cmdline: Defining dependency "cmdline" 00:01:08.787 Message: lib/hash: Defining dependency "hash" 00:01:08.787 Message: lib/timer: Defining dependency "timer" 00:01:08.787 Message: lib/compressdev: Defining dependency "compressdev" 00:01:08.787 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:08.787 Message: lib/dmadev: Defining dependency "dmadev" 00:01:08.787 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:08.787 Message: lib/power: 
Defining dependency "power" 00:01:08.787 Message: lib/reorder: Defining dependency "reorder" 00:01:08.787 Message: lib/security: Defining dependency "security" 00:01:08.787 Has header "linux/userfaultfd.h" : YES 00:01:08.787 Has header "linux/vduse.h" : YES 00:01:08.787 Message: lib/vhost: Defining dependency "vhost" 00:01:08.787 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:08.787 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:08.787 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:08.787 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:08.787 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:08.787 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:08.787 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:08.787 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:08.787 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:08.787 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:08.787 Program doxygen found: YES (/usr/bin/doxygen) 00:01:08.787 Configuring doxy-api-html.conf using configuration 00:01:08.787 Configuring doxy-api-man.conf using configuration 00:01:08.787 Program mandb found: YES (/usr/bin/mandb) 00:01:08.787 Program sphinx-build found: NO 00:01:08.787 Configuring rte_build_config.h using configuration 00:01:08.787 Message: 00:01:08.788 ================= 00:01:08.788 Applications Enabled 00:01:08.788 ================= 00:01:08.788 00:01:08.788 apps: 00:01:08.788 00:01:08.788 00:01:08.788 Message: 00:01:08.788 ================= 00:01:08.788 Libraries Enabled 00:01:08.788 ================= 00:01:08.788 00:01:08.788 libs: 00:01:08.788 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:08.788 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:08.788 cryptodev, dmadev, power, reorder, security, vhost, 00:01:08.788 00:01:08.788 Message: 00:01:08.788 =============== 00:01:08.788 Drivers Enabled 00:01:08.788 =============== 00:01:08.788 00:01:08.788 common: 00:01:08.788 00:01:08.788 bus: 00:01:08.788 pci, vdev, 00:01:08.788 mempool: 00:01:08.788 ring, 00:01:08.788 dma: 00:01:08.788 00:01:08.788 net: 00:01:08.788 00:01:08.788 crypto: 00:01:08.788 00:01:08.788 compress: 00:01:08.788 00:01:08.788 vdpa: 00:01:08.788 00:01:08.788 00:01:08.788 Message: 00:01:08.788 ================= 00:01:08.788 Content Skipped 00:01:08.788 ================= 00:01:08.788 00:01:08.788 apps: 00:01:08.788 dumpcap: explicitly disabled via build config 00:01:08.788 graph: explicitly disabled via build config 00:01:08.788 pdump: explicitly disabled via build config 00:01:08.788 proc-info: explicitly disabled via build config 00:01:08.788 test-acl: explicitly disabled via build config 00:01:08.788 test-bbdev: explicitly disabled via build config 00:01:08.788 test-cmdline: explicitly disabled via build config 00:01:08.788 test-compress-perf: explicitly disabled via build config 00:01:08.788 test-crypto-perf: explicitly disabled via build config 00:01:08.788 test-dma-perf: explicitly disabled via build config 00:01:08.788 test-eventdev: explicitly disabled via build config 00:01:08.788 test-fib: explicitly disabled via build config 00:01:08.788 test-flow-perf: explicitly disabled via build config 00:01:08.788 test-gpudev: explicitly disabled via build config 00:01:08.788 test-mldev: explicitly disabled via build config 00:01:08.788 test-pipeline: 
explicitly disabled via build config 00:01:08.788 test-pmd: explicitly disabled via build config 00:01:08.788 test-regex: explicitly disabled via build config 00:01:08.788 test-sad: explicitly disabled via build config 00:01:08.788 test-security-perf: explicitly disabled via build config 00:01:08.788 00:01:08.788 libs: 00:01:08.788 metrics: explicitly disabled via build config 00:01:08.788 acl: explicitly disabled via build config 00:01:08.788 bbdev: explicitly disabled via build config 00:01:08.788 bitratestats: explicitly disabled via build config 00:01:08.788 bpf: explicitly disabled via build config 00:01:08.788 cfgfile: explicitly disabled via build config 00:01:08.788 distributor: explicitly disabled via build config 00:01:08.788 efd: explicitly disabled via build config 00:01:08.788 eventdev: explicitly disabled via build config 00:01:08.788 dispatcher: explicitly disabled via build config 00:01:08.788 gpudev: explicitly disabled via build config 00:01:08.788 gro: explicitly disabled via build config 00:01:08.788 gso: explicitly disabled via build config 00:01:08.788 ip_frag: explicitly disabled via build config 00:01:08.788 jobstats: explicitly disabled via build config 00:01:08.788 latencystats: explicitly disabled via build config 00:01:08.788 lpm: explicitly disabled via build config 00:01:08.788 member: explicitly disabled via build config 00:01:08.788 pcapng: explicitly disabled via build config 00:01:08.788 rawdev: explicitly disabled via build config 00:01:08.788 regexdev: explicitly disabled via build config 00:01:08.788 mldev: explicitly disabled via build config 00:01:08.788 rib: explicitly disabled via build config 00:01:08.788 sched: explicitly disabled via build config 00:01:08.788 stack: explicitly disabled via build config 00:01:08.788 ipsec: explicitly disabled via build config 00:01:08.788 pdcp: explicitly disabled via build config 00:01:08.788 fib: explicitly disabled via build config 00:01:08.788 port: explicitly disabled via build config 00:01:08.788 pdump: explicitly disabled via build config 00:01:08.788 table: explicitly disabled via build config 00:01:08.788 pipeline: explicitly disabled via build config 00:01:08.788 graph: explicitly disabled via build config 00:01:08.788 node: explicitly disabled via build config 00:01:08.788 00:01:08.788 drivers: 00:01:08.788 common/cpt: not in enabled drivers build config 00:01:08.788 common/dpaax: not in enabled drivers build config 00:01:08.788 common/iavf: not in enabled drivers build config 00:01:08.788 common/idpf: not in enabled drivers build config 00:01:08.788 common/mvep: not in enabled drivers build config 00:01:08.788 common/octeontx: not in enabled drivers build config 00:01:08.788 bus/auxiliary: not in enabled drivers build config 00:01:08.788 bus/cdx: not in enabled drivers build config 00:01:08.788 bus/dpaa: not in enabled drivers build config 00:01:08.788 bus/fslmc: not in enabled drivers build config 00:01:08.788 bus/ifpga: not in enabled drivers build config 00:01:08.788 bus/platform: not in enabled drivers build config 00:01:08.788 bus/vmbus: not in enabled drivers build config 00:01:08.788 common/cnxk: not in enabled drivers build config 00:01:08.788 common/mlx5: not in enabled drivers build config 00:01:08.788 common/nfp: not in enabled drivers build config 00:01:08.788 common/qat: not in enabled drivers build config 00:01:08.788 common/sfc_efx: not in enabled drivers build config 00:01:08.788 mempool/bucket: not in enabled drivers build config 00:01:08.788 mempool/cnxk: not in enabled drivers build 
config 00:01:08.788 mempool/dpaa: not in enabled drivers build config 00:01:08.788 mempool/dpaa2: not in enabled drivers build config 00:01:08.788 mempool/octeontx: not in enabled drivers build config 00:01:08.788 mempool/stack: not in enabled drivers build config 00:01:08.788 dma/cnxk: not in enabled drivers build config 00:01:08.788 dma/dpaa: not in enabled drivers build config 00:01:08.788 dma/dpaa2: not in enabled drivers build config 00:01:08.788 dma/hisilicon: not in enabled drivers build config 00:01:08.788 dma/idxd: not in enabled drivers build config 00:01:08.788 dma/ioat: not in enabled drivers build config 00:01:08.788 dma/skeleton: not in enabled drivers build config 00:01:08.788 net/af_packet: not in enabled drivers build config 00:01:08.788 net/af_xdp: not in enabled drivers build config 00:01:08.788 net/ark: not in enabled drivers build config 00:01:08.788 net/atlantic: not in enabled drivers build config 00:01:08.788 net/avp: not in enabled drivers build config 00:01:08.788 net/axgbe: not in enabled drivers build config 00:01:08.788 net/bnx2x: not in enabled drivers build config 00:01:08.788 net/bnxt: not in enabled drivers build config 00:01:08.788 net/bonding: not in enabled drivers build config 00:01:08.788 net/cnxk: not in enabled drivers build config 00:01:08.788 net/cpfl: not in enabled drivers build config 00:01:08.788 net/cxgbe: not in enabled drivers build config 00:01:08.788 net/dpaa: not in enabled drivers build config 00:01:08.788 net/dpaa2: not in enabled drivers build config 00:01:08.788 net/e1000: not in enabled drivers build config 00:01:08.788 net/ena: not in enabled drivers build config 00:01:08.788 net/enetc: not in enabled drivers build config 00:01:08.788 net/enetfec: not in enabled drivers build config 00:01:08.788 net/enic: not in enabled drivers build config 00:01:08.788 net/failsafe: not in enabled drivers build config 00:01:08.788 net/fm10k: not in enabled drivers build config 00:01:08.788 net/gve: not in enabled drivers build config 00:01:08.788 net/hinic: not in enabled drivers build config 00:01:08.788 net/hns3: not in enabled drivers build config 00:01:08.788 net/i40e: not in enabled drivers build config 00:01:08.788 net/iavf: not in enabled drivers build config 00:01:08.788 net/ice: not in enabled drivers build config 00:01:08.788 net/idpf: not in enabled drivers build config 00:01:08.788 net/igc: not in enabled drivers build config 00:01:08.788 net/ionic: not in enabled drivers build config 00:01:08.788 net/ipn3ke: not in enabled drivers build config 00:01:08.788 net/ixgbe: not in enabled drivers build config 00:01:08.788 net/mana: not in enabled drivers build config 00:01:08.788 net/memif: not in enabled drivers build config 00:01:08.788 net/mlx4: not in enabled drivers build config 00:01:08.788 net/mlx5: not in enabled drivers build config 00:01:08.788 net/mvneta: not in enabled drivers build config 00:01:08.788 net/mvpp2: not in enabled drivers build config 00:01:08.788 net/netvsc: not in enabled drivers build config 00:01:08.788 net/nfb: not in enabled drivers build config 00:01:08.788 net/nfp: not in enabled drivers build config 00:01:08.788 net/ngbe: not in enabled drivers build config 00:01:08.788 net/null: not in enabled drivers build config 00:01:08.788 net/octeontx: not in enabled drivers build config 00:01:08.788 net/octeon_ep: not in enabled drivers build config 00:01:08.788 net/pcap: not in enabled drivers build config 00:01:08.788 net/pfe: not in enabled drivers build config 00:01:08.788 net/qede: not in enabled drivers build 
config 00:01:08.788 net/ring: not in enabled drivers build config 00:01:08.788 net/sfc: not in enabled drivers build config 00:01:08.788 net/softnic: not in enabled drivers build config 00:01:08.788 net/tap: not in enabled drivers build config 00:01:08.788 net/thunderx: not in enabled drivers build config 00:01:08.788 net/txgbe: not in enabled drivers build config 00:01:08.788 net/vdev_netvsc: not in enabled drivers build config 00:01:08.788 net/vhost: not in enabled drivers build config 00:01:08.788 net/virtio: not in enabled drivers build config 00:01:08.788 net/vmxnet3: not in enabled drivers build config 00:01:08.789 raw/*: missing internal dependency, "rawdev" 00:01:08.789 crypto/armv8: not in enabled drivers build config 00:01:08.789 crypto/bcmfs: not in enabled drivers build config 00:01:08.789 crypto/caam_jr: not in enabled drivers build config 00:01:08.789 crypto/ccp: not in enabled drivers build config 00:01:08.789 crypto/cnxk: not in enabled drivers build config 00:01:08.789 crypto/dpaa_sec: not in enabled drivers build config 00:01:08.789 crypto/dpaa2_sec: not in enabled drivers build config 00:01:08.789 crypto/ipsec_mb: not in enabled drivers build config 00:01:08.789 crypto/mlx5: not in enabled drivers build config 00:01:08.789 crypto/mvsam: not in enabled drivers build config 00:01:08.789 crypto/nitrox: not in enabled drivers build config 00:01:08.789 crypto/null: not in enabled drivers build config 00:01:08.789 crypto/octeontx: not in enabled drivers build config 00:01:08.789 crypto/openssl: not in enabled drivers build config 00:01:08.789 crypto/scheduler: not in enabled drivers build config 00:01:08.789 crypto/uadk: not in enabled drivers build config 00:01:08.789 crypto/virtio: not in enabled drivers build config 00:01:08.789 compress/isal: not in enabled drivers build config 00:01:08.789 compress/mlx5: not in enabled drivers build config 00:01:08.789 compress/octeontx: not in enabled drivers build config 00:01:08.789 compress/zlib: not in enabled drivers build config 00:01:08.789 regex/*: missing internal dependency, "regexdev" 00:01:08.789 ml/*: missing internal dependency, "mldev" 00:01:08.789 vdpa/ifc: not in enabled drivers build config 00:01:08.789 vdpa/mlx5: not in enabled drivers build config 00:01:08.789 vdpa/nfp: not in enabled drivers build config 00:01:08.789 vdpa/sfc: not in enabled drivers build config 00:01:08.789 event/*: missing internal dependency, "eventdev" 00:01:08.789 baseband/*: missing internal dependency, "bbdev" 00:01:08.789 gpu/*: missing internal dependency, "gpudev" 00:01:08.789 00:01:08.789 00:01:08.789 Build targets in project: 85 00:01:08.789 00:01:08.789 DPDK 23.11.0 00:01:08.789 00:01:08.789 User defined options 00:01:08.789 buildtype : debug 00:01:08.789 default_library : shared 00:01:08.789 libdir : lib 00:01:08.789 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:08.789 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:08.789 c_link_args : 00:01:08.789 cpu_instruction_set: native 00:01:08.789 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:08.789 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:08.789 enable_docs : false 00:01:08.789 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:08.789 enable_kmods : false 00:01:08.789 tests : false 00:01:08.789 00:01:08.789 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:09.052 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:09.052 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:09.052 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:09.052 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:09.052 [4/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:09.052 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:09.314 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:09.314 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:09.314 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:09.314 [9/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:09.314 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:09.314 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:09.314 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:09.314 [13/265] Linking static target lib/librte_kvargs.a 00:01:09.314 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:09.314 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:09.314 [16/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:09.314 [17/265] Linking static target lib/librte_log.a 00:01:09.314 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:09.314 [19/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:09.314 [20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:09.576 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:09.837 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.101 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:10.101 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:10.101 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:10.101 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:10.101 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:10.101 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:10.101 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:10.101 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:10.101 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:10.101 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:10.101 [33/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:10.101 [34/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:10.101 [35/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:10.101 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:10.101 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:10.101 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:10.101 [39/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:10.101 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:10.101 [41/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:10.101 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:10.101 [43/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:10.101 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:10.101 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:10.101 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:10.101 [47/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:10.101 [48/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:10.101 [49/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:10.101 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:10.101 [51/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:10.101 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:10.101 [53/265] Linking static target lib/librte_telemetry.a 00:01:10.101 [54/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:10.101 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:10.101 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:10.101 [57/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:10.101 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:10.101 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:10.101 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:10.101 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:10.365 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:10.365 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:10.365 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:10.365 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:10.365 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:10.365 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:10.365 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:10.365 [69/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:10.365 [70/265] Linking static target lib/librte_pci.a 00:01:10.365 [71/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.365 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:10.365 [73/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:10.365 [74/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:10.634 [75/265] Linking target lib/librte_log.so.24.0 00:01:10.634 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:10.634 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:10.634 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:10.634 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:10.634 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:10.634 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:10.634 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:10.634 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:10.634 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:10.634 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:10.634 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:10.634 [87/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:10.899 [88/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:10.899 [89/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.899 [90/265] Linking target lib/librte_kvargs.so.24.0 00:01:10.899 [91/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:10.899 [92/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:10.899 [93/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:10.899 [94/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:10.899 [95/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:10.899 [96/265] Linking static target lib/librte_ring.a 00:01:10.899 [97/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:11.160 [98/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:11.160 [99/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:11.160 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:11.160 [101/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:11.160 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:11.160 [103/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:11.160 [104/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:11.160 [105/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:11.160 [106/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:11.160 [107/265] Linking static target lib/librte_meter.a 00:01:11.160 [108/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:11.160 [109/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:11.160 [110/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:11.160 [111/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.160 [112/265] Linking static target lib/librte_eal.a 00:01:11.160 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:11.160 [114/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:11.160 [115/265] 
Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:11.160 [116/265] Linking static target lib/librte_mempool.a 00:01:11.160 [117/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:11.160 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:11.160 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:11.421 [120/265] Linking target lib/librte_telemetry.so.24.0 00:01:11.421 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:11.421 [122/265] Linking static target lib/librte_rcu.a 00:01:11.421 [123/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:11.421 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:11.421 [125/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:11.421 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:11.421 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:11.421 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:11.421 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:11.421 [130/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:11.421 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:11.421 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:11.421 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:11.421 [134/265] Linking static target lib/librte_cmdline.a 00:01:11.687 [135/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.687 [136/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:11.687 [137/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:11.687 [138/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:11.687 [139/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.687 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:11.687 [141/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:11.687 [142/265] Linking static target lib/librte_net.a 00:01:11.687 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:11.687 [144/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:11.687 [145/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:11.687 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:11.687 [147/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:11.947 [148/265] Linking static target lib/librte_timer.a 00:01:11.947 [149/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:11.947 [150/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.947 [151/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:11.947 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:11.947 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:12.206 [154/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.206 [155/265] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:12.206 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:12.206 [157/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:12.206 [158/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:12.206 [159/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:12.206 [160/265] Linking static target lib/librte_dmadev.a 00:01:12.206 [161/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:12.206 [162/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:12.206 [163/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.206 [164/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.206 [165/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:12.206 [166/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:12.464 [167/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:12.464 [168/265] Linking static target lib/librte_hash.a 00:01:12.464 [169/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:12.464 [170/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:12.464 [171/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:12.464 [172/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:12.464 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:12.464 [174/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:12.464 [175/265] Linking static target lib/librte_power.a 00:01:12.464 [176/265] Linking static target lib/librte_compressdev.a 00:01:12.464 [177/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:12.464 [178/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:12.464 [179/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.464 [180/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:12.464 [181/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.464 [182/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:12.464 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:12.464 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:12.464 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:12.464 [186/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:12.464 [187/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:12.723 [188/265] Linking static target lib/librte_mbuf.a 00:01:12.723 [189/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:12.723 [190/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:12.723 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:12.723 [192/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:12.723 [193/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:12.723 [194/265] Linking static target lib/librte_reorder.a 00:01:12.723 [195/265] Linking static target 
drivers/libtmp_rte_mempool_ring.a 00:01:12.723 [196/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:12.723 [197/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:12.723 [198/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:12.723 [199/265] Linking static target drivers/librte_bus_vdev.a 00:01:12.723 [200/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:12.723 [201/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.981 [202/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:12.981 [203/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:12.981 [204/265] Linking static target drivers/librte_bus_pci.a 00:01:12.981 [205/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.981 [206/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:12.981 [207/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:12.981 [208/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:12.981 [209/265] Linking static target drivers/librte_mempool_ring.a 00:01:12.981 [210/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.981 [211/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.981 [212/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.981 [213/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:12.981 [214/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:12.981 [215/265] Linking static target lib/librte_security.a 00:01:12.981 [216/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:12.981 [217/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.240 [218/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.240 [219/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:13.240 [220/265] Linking static target lib/librte_cryptodev.a 00:01:13.240 [221/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.499 [222/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:13.499 [223/265] Linking static target lib/librte_ethdev.a 00:01:14.433 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.807 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:17.180 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.438 [227/265] Linking target lib/librte_eal.so.24.0 00:01:17.438 [228/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.438 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:17.438 [230/265] Linking target lib/librte_meter.so.24.0 00:01:17.438 [231/265] Linking target lib/librte_pci.so.24.0 00:01:17.438 [232/265] Linking target lib/librte_ring.so.24.0 00:01:17.438 [233/265] Linking target 
lib/librte_timer.so.24.0 00:01:17.438 [234/265] Linking target lib/librte_dmadev.so.24.0 00:01:17.438 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:17.695 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:17.695 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:17.695 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:17.695 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:17.695 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:17.695 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:17.695 [242/265] Linking target lib/librte_mempool.so.24.0 00:01:17.695 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:17.953 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:17.953 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:17.953 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:17.953 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:17.953 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:17.953 [249/265] Linking target lib/librte_reorder.so.24.0 00:01:17.953 [250/265] Linking target lib/librte_compressdev.so.24.0 00:01:17.953 [251/265] Linking target lib/librte_net.so.24.0 00:01:17.953 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:18.211 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:18.211 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:18.211 [255/265] Linking target lib/librte_security.so.24.0 00:01:18.211 [256/265] Linking target lib/librte_hash.so.24.0 00:01:18.211 [257/265] Linking target lib/librte_cmdline.so.24.0 00:01:18.211 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:18.211 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:18.470 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:18.470 [261/265] Linking target lib/librte_power.so.24.0 00:01:20.999 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:21.257 [263/265] Linking static target lib/librte_vhost.a 00:01:22.193 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.193 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:22.193 INFO: autodetecting backend as ninja 00:01:22.193 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:23.166 CC lib/ut/ut.o 00:01:23.167 CC lib/ut_mock/mock.o 00:01:23.167 CC lib/log/log.o 00:01:23.167 CC lib/log/log_flags.o 00:01:23.167 CC lib/log/log_deprecated.o 00:01:23.425 LIB libspdk_ut_mock.a 00:01:23.425 SO libspdk_ut_mock.so.6.0 00:01:23.425 LIB libspdk_ut.a 00:01:23.425 LIB libspdk_log.a 00:01:23.425 SO libspdk_ut.so.2.0 00:01:23.425 SO libspdk_log.so.7.0 00:01:23.425 SYMLINK libspdk_ut_mock.so 00:01:23.425 SYMLINK libspdk_ut.so 00:01:23.425 SYMLINK libspdk_log.so 00:01:23.684 CC lib/dma/dma.o 00:01:23.684 CXX lib/trace_parser/trace.o 00:01:23.684 CC lib/util/base64.o 00:01:23.684 CC lib/util/bit_array.o 00:01:23.684 CC lib/util/cpuset.o 00:01:23.684 CC lib/util/crc16.o 00:01:23.684 CC lib/util/crc32.o 00:01:23.684 
CC lib/util/crc32c.o 00:01:23.684 CC lib/ioat/ioat.o 00:01:23.684 CC lib/util/crc32_ieee.o 00:01:23.684 CC lib/util/crc64.o 00:01:23.684 CC lib/util/dif.o 00:01:23.684 CC lib/util/fd.o 00:01:23.684 CC lib/util/file.o 00:01:23.684 CC lib/util/hexlify.o 00:01:23.684 CC lib/util/iov.o 00:01:23.684 CC lib/util/math.o 00:01:23.684 CC lib/util/pipe.o 00:01:23.684 CC lib/util/strerror_tls.o 00:01:23.684 CC lib/util/string.o 00:01:23.684 CC lib/util/uuid.o 00:01:23.684 CC lib/util/fd_group.o 00:01:23.684 CC lib/util/xor.o 00:01:23.684 CC lib/util/zipf.o 00:01:23.684 CC lib/vfio_user/host/vfio_user_pci.o 00:01:23.684 CC lib/vfio_user/host/vfio_user.o 00:01:23.942 LIB libspdk_dma.a 00:01:23.942 SO libspdk_dma.so.4.0 00:01:23.942 SYMLINK libspdk_dma.so 00:01:23.942 LIB libspdk_ioat.a 00:01:23.942 SO libspdk_ioat.so.7.0 00:01:23.942 SYMLINK libspdk_ioat.so 00:01:23.942 LIB libspdk_vfio_user.a 00:01:24.200 SO libspdk_vfio_user.so.5.0 00:01:24.200 SYMLINK libspdk_vfio_user.so 00:01:24.200 LIB libspdk_util.a 00:01:24.459 SO libspdk_util.so.9.0 00:01:24.459 SYMLINK libspdk_util.so 00:01:24.717 LIB libspdk_trace_parser.a 00:01:24.717 CC lib/conf/conf.o 00:01:24.717 CC lib/idxd/idxd.o 00:01:24.717 CC lib/idxd/idxd_user.o 00:01:24.717 CC lib/rdma/common.o 00:01:24.717 CC lib/rdma/rdma_verbs.o 00:01:24.717 SO libspdk_trace_parser.so.5.0 00:01:24.717 CC lib/json/json_parse.o 00:01:24.717 CC lib/vmd/vmd.o 00:01:24.717 CC lib/json/json_util.o 00:01:24.717 CC lib/vmd/led.o 00:01:24.717 CC lib/env_dpdk/env.o 00:01:24.717 CC lib/json/json_write.o 00:01:24.717 CC lib/env_dpdk/memory.o 00:01:24.717 CC lib/env_dpdk/pci.o 00:01:24.717 CC lib/env_dpdk/init.o 00:01:24.717 CC lib/env_dpdk/threads.o 00:01:24.717 CC lib/env_dpdk/pci_ioat.o 00:01:24.717 CC lib/env_dpdk/pci_virtio.o 00:01:24.717 CC lib/env_dpdk/pci_vmd.o 00:01:24.717 CC lib/env_dpdk/pci_idxd.o 00:01:24.717 CC lib/env_dpdk/pci_event.o 00:01:24.717 CC lib/env_dpdk/sigbus_handler.o 00:01:24.717 CC lib/env_dpdk/pci_dpdk.o 00:01:24.717 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:24.717 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:24.717 SYMLINK libspdk_trace_parser.so 00:01:24.975 LIB libspdk_conf.a 00:01:24.975 SO libspdk_conf.so.6.0 00:01:24.975 LIB libspdk_json.a 00:01:24.975 LIB libspdk_rdma.a 00:01:24.975 SYMLINK libspdk_conf.so 00:01:24.975 SO libspdk_rdma.so.6.0 00:01:24.975 SO libspdk_json.so.6.0 00:01:24.975 SYMLINK libspdk_rdma.so 00:01:24.975 SYMLINK libspdk_json.so 00:01:25.233 CC lib/jsonrpc/jsonrpc_server.o 00:01:25.233 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:25.233 CC lib/jsonrpc/jsonrpc_client.o 00:01:25.233 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:25.233 LIB libspdk_idxd.a 00:01:25.233 SO libspdk_idxd.so.12.0 00:01:25.233 SYMLINK libspdk_idxd.so 00:01:25.233 LIB libspdk_vmd.a 00:01:25.233 SO libspdk_vmd.so.6.0 00:01:25.492 SYMLINK libspdk_vmd.so 00:01:25.492 LIB libspdk_jsonrpc.a 00:01:25.492 SO libspdk_jsonrpc.so.6.0 00:01:25.492 SYMLINK libspdk_jsonrpc.so 00:01:25.750 CC lib/rpc/rpc.o 00:01:26.008 LIB libspdk_rpc.a 00:01:26.008 SO libspdk_rpc.so.6.0 00:01:26.008 SYMLINK libspdk_rpc.so 00:01:26.266 CC lib/trace/trace.o 00:01:26.266 CC lib/keyring/keyring.o 00:01:26.266 CC lib/notify/notify.o 00:01:26.266 CC lib/trace/trace_flags.o 00:01:26.266 CC lib/notify/notify_rpc.o 00:01:26.266 CC lib/keyring/keyring_rpc.o 00:01:26.266 CC lib/trace/trace_rpc.o 00:01:26.266 LIB libspdk_notify.a 00:01:26.266 SO libspdk_notify.so.6.0 00:01:26.524 LIB libspdk_keyring.a 00:01:26.524 LIB libspdk_trace.a 00:01:26.524 SYMLINK libspdk_notify.so 00:01:26.524 SO 
libspdk_keyring.so.1.0 00:01:26.524 SO libspdk_trace.so.10.0 00:01:26.524 SYMLINK libspdk_keyring.so 00:01:26.524 SYMLINK libspdk_trace.so 00:01:26.798 LIB libspdk_env_dpdk.a 00:01:26.798 CC lib/thread/thread.o 00:01:26.798 CC lib/thread/iobuf.o 00:01:26.798 CC lib/sock/sock.o 00:01:26.798 CC lib/sock/sock_rpc.o 00:01:26.798 SO libspdk_env_dpdk.so.14.0 00:01:26.798 SYMLINK libspdk_env_dpdk.so 00:01:27.070 LIB libspdk_sock.a 00:01:27.070 SO libspdk_sock.so.9.0 00:01:27.070 SYMLINK libspdk_sock.so 00:01:27.329 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:27.329 CC lib/nvme/nvme_ctrlr.o 00:01:27.329 CC lib/nvme/nvme_fabric.o 00:01:27.329 CC lib/nvme/nvme_ns_cmd.o 00:01:27.329 CC lib/nvme/nvme_ns.o 00:01:27.329 CC lib/nvme/nvme_pcie_common.o 00:01:27.329 CC lib/nvme/nvme_pcie.o 00:01:27.329 CC lib/nvme/nvme_qpair.o 00:01:27.329 CC lib/nvme/nvme.o 00:01:27.329 CC lib/nvme/nvme_quirks.o 00:01:27.329 CC lib/nvme/nvme_transport.o 00:01:27.329 CC lib/nvme/nvme_discovery.o 00:01:27.329 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:27.329 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:27.329 CC lib/nvme/nvme_tcp.o 00:01:27.329 CC lib/nvme/nvme_opal.o 00:01:27.329 CC lib/nvme/nvme_io_msg.o 00:01:27.329 CC lib/nvme/nvme_poll_group.o 00:01:27.329 CC lib/nvme/nvme_zns.o 00:01:27.329 CC lib/nvme/nvme_stubs.o 00:01:27.329 CC lib/nvme/nvme_auth.o 00:01:27.329 CC lib/nvme/nvme_cuse.o 00:01:27.329 CC lib/nvme/nvme_rdma.o 00:01:28.264 LIB libspdk_thread.a 00:01:28.264 SO libspdk_thread.so.10.0 00:01:28.264 SYMLINK libspdk_thread.so 00:01:28.523 CC lib/virtio/virtio.o 00:01:28.523 CC lib/accel/accel.o 00:01:28.523 CC lib/blob/blobstore.o 00:01:28.523 CC lib/accel/accel_rpc.o 00:01:28.523 CC lib/virtio/virtio_vhost_user.o 00:01:28.523 CC lib/init/json_config.o 00:01:28.523 CC lib/blob/request.o 00:01:28.523 CC lib/accel/accel_sw.o 00:01:28.523 CC lib/virtio/virtio_vfio_user.o 00:01:28.523 CC lib/init/subsystem.o 00:01:28.523 CC lib/blob/zeroes.o 00:01:28.523 CC lib/virtio/virtio_pci.o 00:01:28.523 CC lib/init/subsystem_rpc.o 00:01:28.523 CC lib/blob/blob_bs_dev.o 00:01:28.523 CC lib/init/rpc.o 00:01:28.781 LIB libspdk_init.a 00:01:28.781 SO libspdk_init.so.5.0 00:01:28.781 LIB libspdk_virtio.a 00:01:28.781 SYMLINK libspdk_init.so 00:01:28.781 SO libspdk_virtio.so.7.0 00:01:29.040 SYMLINK libspdk_virtio.so 00:01:29.040 CC lib/event/app.o 00:01:29.040 CC lib/event/reactor.o 00:01:29.040 CC lib/event/log_rpc.o 00:01:29.040 CC lib/event/app_rpc.o 00:01:29.040 CC lib/event/scheduler_static.o 00:01:29.299 LIB libspdk_event.a 00:01:29.557 SO libspdk_event.so.13.0 00:01:29.557 SYMLINK libspdk_event.so 00:01:29.557 LIB libspdk_accel.a 00:01:29.557 SO libspdk_accel.so.15.0 00:01:29.557 SYMLINK libspdk_accel.so 00:01:29.557 LIB libspdk_nvme.a 00:01:29.815 SO libspdk_nvme.so.13.0 00:01:29.815 CC lib/bdev/bdev.o 00:01:29.815 CC lib/bdev/bdev_rpc.o 00:01:29.815 CC lib/bdev/bdev_zone.o 00:01:29.815 CC lib/bdev/part.o 00:01:29.815 CC lib/bdev/scsi_nvme.o 00:01:30.073 SYMLINK libspdk_nvme.so 00:01:31.446 LIB libspdk_blob.a 00:01:31.704 SO libspdk_blob.so.11.0 00:01:31.704 SYMLINK libspdk_blob.so 00:01:31.962 CC lib/lvol/lvol.o 00:01:31.962 CC lib/blobfs/blobfs.o 00:01:31.962 CC lib/blobfs/tree.o 00:01:32.219 LIB libspdk_bdev.a 00:01:32.478 SO libspdk_bdev.so.15.0 00:01:32.478 SYMLINK libspdk_bdev.so 00:01:32.478 CC lib/ublk/ublk.o 00:01:32.478 CC lib/nbd/nbd.o 00:01:32.478 CC lib/ftl/ftl_core.o 00:01:32.478 CC lib/ublk/ublk_rpc.o 00:01:32.478 CC lib/scsi/dev.o 00:01:32.478 CC lib/nbd/nbd_rpc.o 00:01:32.478 CC lib/ftl/ftl_init.o 00:01:32.478 CC 
lib/nvmf/ctrlr.o 00:01:32.478 CC lib/scsi/lun.o 00:01:32.478 CC lib/ftl/ftl_layout.o 00:01:32.478 CC lib/scsi/port.o 00:01:32.478 CC lib/nvmf/ctrlr_discovery.o 00:01:32.478 CC lib/ftl/ftl_debug.o 00:01:32.478 CC lib/scsi/scsi.o 00:01:32.478 CC lib/nvmf/ctrlr_bdev.o 00:01:32.478 CC lib/ftl/ftl_io.o 00:01:32.478 CC lib/scsi/scsi_bdev.o 00:01:32.478 CC lib/nvmf/subsystem.o 00:01:32.478 CC lib/scsi/scsi_pr.o 00:01:32.478 CC lib/ftl/ftl_sb.o 00:01:32.478 CC lib/nvmf/nvmf.o 00:01:32.478 CC lib/ftl/ftl_l2p.o 00:01:32.478 CC lib/scsi/scsi_rpc.o 00:01:32.478 CC lib/ftl/ftl_l2p_flat.o 00:01:32.478 CC lib/nvmf/nvmf_rpc.o 00:01:32.478 CC lib/nvmf/transport.o 00:01:32.478 CC lib/scsi/task.o 00:01:32.478 CC lib/ftl/ftl_nv_cache.o 00:01:32.478 CC lib/nvmf/tcp.o 00:01:32.478 CC lib/nvmf/rdma.o 00:01:32.478 CC lib/ftl/ftl_band.o 00:01:32.478 CC lib/ftl/ftl_band_ops.o 00:01:32.478 CC lib/ftl/ftl_writer.o 00:01:32.478 CC lib/ftl/ftl_rq.o 00:01:32.478 CC lib/ftl/ftl_reloc.o 00:01:32.478 CC lib/ftl/ftl_l2p_cache.o 00:01:32.478 CC lib/ftl/ftl_p2l.o 00:01:32.478 CC lib/ftl/mngt/ftl_mngt.o 00:01:32.741 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:32.741 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:32.741 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:32.741 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:32.741 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:32.741 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:32.741 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:32.741 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:32.741 LIB libspdk_blobfs.a 00:01:32.741 SO libspdk_blobfs.so.10.0 00:01:32.741 LIB libspdk_lvol.a 00:01:32.741 SYMLINK libspdk_blobfs.so 00:01:33.006 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:33.006 SO libspdk_lvol.so.10.0 00:01:33.006 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:33.006 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:33.006 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:33.006 CC lib/ftl/utils/ftl_conf.o 00:01:33.006 CC lib/ftl/utils/ftl_md.o 00:01:33.006 CC lib/ftl/utils/ftl_mempool.o 00:01:33.006 SYMLINK libspdk_lvol.so 00:01:33.006 CC lib/ftl/utils/ftl_bitmap.o 00:01:33.006 CC lib/ftl/utils/ftl_property.o 00:01:33.006 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:33.006 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:33.006 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:33.006 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:33.006 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:33.006 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:33.006 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:33.006 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:33.006 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:33.006 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:33.006 CC lib/ftl/base/ftl_base_dev.o 00:01:33.268 CC lib/ftl/base/ftl_base_bdev.o 00:01:33.268 CC lib/ftl/ftl_trace.o 00:01:33.268 LIB libspdk_nbd.a 00:01:33.268 SO libspdk_nbd.so.7.0 00:01:33.527 SYMLINK libspdk_nbd.so 00:01:33.527 LIB libspdk_scsi.a 00:01:33.527 SO libspdk_scsi.so.9.0 00:01:33.527 LIB libspdk_ublk.a 00:01:33.527 SYMLINK libspdk_scsi.so 00:01:33.527 SO libspdk_ublk.so.3.0 00:01:33.785 SYMLINK libspdk_ublk.so 00:01:33.785 CC lib/vhost/vhost.o 00:01:33.785 CC lib/iscsi/conn.o 00:01:33.785 CC lib/vhost/vhost_rpc.o 00:01:33.785 CC lib/iscsi/init_grp.o 00:01:33.785 CC lib/iscsi/iscsi.o 00:01:33.785 CC lib/vhost/vhost_scsi.o 00:01:33.785 CC lib/iscsi/md5.o 00:01:33.785 CC lib/vhost/vhost_blk.o 00:01:33.785 CC lib/iscsi/param.o 00:01:33.785 CC lib/vhost/rte_vhost_user.o 00:01:33.785 CC lib/iscsi/portal_grp.o 00:01:33.785 CC lib/iscsi/tgt_node.o 00:01:33.785 CC lib/iscsi/iscsi_subsystem.o 00:01:33.785 CC lib/iscsi/iscsi_rpc.o 00:01:33.785 CC lib/iscsi/task.o 
00:01:34.045 LIB libspdk_ftl.a 00:01:34.045 SO libspdk_ftl.so.9.0 00:01:34.613 SYMLINK libspdk_ftl.so 00:01:34.872 LIB libspdk_vhost.a 00:01:35.131 SO libspdk_vhost.so.8.0 00:01:35.131 LIB libspdk_nvmf.a 00:01:35.131 SYMLINK libspdk_vhost.so 00:01:35.131 LIB libspdk_iscsi.a 00:01:35.131 SO libspdk_nvmf.so.18.0 00:01:35.131 SO libspdk_iscsi.so.8.0 00:01:35.390 SYMLINK libspdk_nvmf.so 00:01:35.390 SYMLINK libspdk_iscsi.so 00:01:35.648 CC module/env_dpdk/env_dpdk_rpc.o 00:01:35.648 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:35.648 CC module/accel/ioat/accel_ioat.o 00:01:35.648 CC module/sock/posix/posix.o 00:01:35.648 CC module/scheduler/gscheduler/gscheduler.o 00:01:35.648 CC module/blob/bdev/blob_bdev.o 00:01:35.648 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:35.648 CC module/keyring/file/keyring_rpc.o 00:01:35.648 CC module/accel/ioat/accel_ioat_rpc.o 00:01:35.648 CC module/accel/iaa/accel_iaa.o 00:01:35.648 CC module/keyring/file/keyring.o 00:01:35.648 CC module/accel/error/accel_error.o 00:01:35.648 CC module/accel/dsa/accel_dsa.o 00:01:35.648 CC module/accel/error/accel_error_rpc.o 00:01:35.648 CC module/accel/dsa/accel_dsa_rpc.o 00:01:35.648 CC module/accel/iaa/accel_iaa_rpc.o 00:01:35.648 LIB libspdk_env_dpdk_rpc.a 00:01:35.907 SO libspdk_env_dpdk_rpc.so.6.0 00:01:35.907 SYMLINK libspdk_env_dpdk_rpc.so 00:01:35.907 LIB libspdk_keyring_file.a 00:01:35.907 LIB libspdk_scheduler_gscheduler.a 00:01:35.907 LIB libspdk_scheduler_dpdk_governor.a 00:01:35.907 SO libspdk_scheduler_gscheduler.so.4.0 00:01:35.907 SO libspdk_keyring_file.so.1.0 00:01:35.907 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:35.907 LIB libspdk_accel_error.a 00:01:35.907 LIB libspdk_accel_ioat.a 00:01:35.907 LIB libspdk_scheduler_dynamic.a 00:01:35.907 LIB libspdk_accel_iaa.a 00:01:35.907 SO libspdk_accel_error.so.2.0 00:01:35.907 SO libspdk_scheduler_dynamic.so.4.0 00:01:35.907 SO libspdk_accel_ioat.so.6.0 00:01:35.907 SYMLINK libspdk_scheduler_gscheduler.so 00:01:35.907 SYMLINK libspdk_keyring_file.so 00:01:35.907 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:35.907 SO libspdk_accel_iaa.so.3.0 00:01:35.907 LIB libspdk_accel_dsa.a 00:01:35.907 LIB libspdk_blob_bdev.a 00:01:35.907 SYMLINK libspdk_accel_error.so 00:01:35.907 SO libspdk_accel_dsa.so.5.0 00:01:35.907 SYMLINK libspdk_scheduler_dynamic.so 00:01:35.907 SYMLINK libspdk_accel_ioat.so 00:01:35.907 SO libspdk_blob_bdev.so.11.0 00:01:35.907 SYMLINK libspdk_accel_iaa.so 00:01:36.166 SYMLINK libspdk_accel_dsa.so 00:01:36.166 SYMLINK libspdk_blob_bdev.so 00:01:36.425 CC module/blobfs/bdev/blobfs_bdev.o 00:01:36.425 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:36.425 CC module/bdev/gpt/gpt.o 00:01:36.425 CC module/bdev/error/vbdev_error.o 00:01:36.425 CC module/bdev/gpt/vbdev_gpt.o 00:01:36.425 CC module/bdev/error/vbdev_error_rpc.o 00:01:36.425 CC module/bdev/lvol/vbdev_lvol.o 00:01:36.425 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:36.425 CC module/bdev/raid/bdev_raid.o 00:01:36.425 CC module/bdev/raid/bdev_raid_rpc.o 00:01:36.425 CC module/bdev/raid/bdev_raid_sb.o 00:01:36.425 CC module/bdev/nvme/bdev_nvme.o 00:01:36.425 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:36.425 CC module/bdev/iscsi/bdev_iscsi.o 00:01:36.425 CC module/bdev/passthru/vbdev_passthru.o 00:01:36.425 CC module/bdev/raid/raid0.o 00:01:36.425 CC module/bdev/nvme/nvme_rpc.o 00:01:36.425 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:36.425 CC module/bdev/ftl/bdev_ftl.o 00:01:36.425 CC module/bdev/delay/vbdev_delay.o 00:01:36.425 CC module/bdev/raid/raid1.o 00:01:36.425 
CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:36.425 CC module/bdev/raid/concat.o 00:01:36.425 CC module/bdev/nvme/bdev_mdns_client.o 00:01:36.425 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:36.425 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:36.425 CC module/bdev/null/bdev_null.o 00:01:36.425 CC module/bdev/nvme/vbdev_opal.o 00:01:36.425 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:36.425 CC module/bdev/null/bdev_null_rpc.o 00:01:36.425 CC module/bdev/malloc/bdev_malloc.o 00:01:36.425 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:36.425 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:36.425 CC module/bdev/aio/bdev_aio.o 00:01:36.425 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:36.425 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:36.425 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:36.425 CC module/bdev/aio/bdev_aio_rpc.o 00:01:36.425 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:36.425 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:36.425 CC module/bdev/split/vbdev_split.o 00:01:36.425 CC module/bdev/split/vbdev_split_rpc.o 00:01:36.684 LIB libspdk_sock_posix.a 00:01:36.684 SO libspdk_sock_posix.so.6.0 00:01:36.684 LIB libspdk_blobfs_bdev.a 00:01:36.684 SO libspdk_blobfs_bdev.so.6.0 00:01:36.684 LIB libspdk_bdev_split.a 00:01:36.684 LIB libspdk_bdev_ftl.a 00:01:36.684 SYMLINK libspdk_sock_posix.so 00:01:36.684 SO libspdk_bdev_split.so.6.0 00:01:36.684 SO libspdk_bdev_ftl.so.6.0 00:01:36.684 SYMLINK libspdk_blobfs_bdev.so 00:01:36.684 LIB libspdk_bdev_iscsi.a 00:01:36.942 LIB libspdk_bdev_error.a 00:01:36.942 SO libspdk_bdev_iscsi.so.6.0 00:01:36.942 LIB libspdk_bdev_null.a 00:01:36.942 SYMLINK libspdk_bdev_split.so 00:01:36.942 SYMLINK libspdk_bdev_ftl.so 00:01:36.942 LIB libspdk_bdev_gpt.a 00:01:36.942 SO libspdk_bdev_error.so.6.0 00:01:36.942 SO libspdk_bdev_null.so.6.0 00:01:36.942 SO libspdk_bdev_gpt.so.6.0 00:01:36.942 LIB libspdk_bdev_passthru.a 00:01:36.942 LIB libspdk_bdev_malloc.a 00:01:36.942 LIB libspdk_bdev_zone_block.a 00:01:36.942 SYMLINK libspdk_bdev_iscsi.so 00:01:36.942 SO libspdk_bdev_passthru.so.6.0 00:01:36.942 LIB libspdk_bdev_aio.a 00:01:36.942 SO libspdk_bdev_zone_block.so.6.0 00:01:36.942 SO libspdk_bdev_malloc.so.6.0 00:01:36.942 SYMLINK libspdk_bdev_error.so 00:01:36.942 SYMLINK libspdk_bdev_null.so 00:01:36.942 SYMLINK libspdk_bdev_gpt.so 00:01:36.942 SO libspdk_bdev_aio.so.6.0 00:01:36.942 LIB libspdk_bdev_delay.a 00:01:36.942 SYMLINK libspdk_bdev_passthru.so 00:01:36.942 SO libspdk_bdev_delay.so.6.0 00:01:36.942 SYMLINK libspdk_bdev_zone_block.so 00:01:36.942 SYMLINK libspdk_bdev_malloc.so 00:01:36.942 SYMLINK libspdk_bdev_aio.so 00:01:36.942 LIB libspdk_bdev_lvol.a 00:01:36.942 SYMLINK libspdk_bdev_delay.so 00:01:36.942 SO libspdk_bdev_lvol.so.6.0 00:01:36.942 LIB libspdk_bdev_virtio.a 00:01:36.942 SO libspdk_bdev_virtio.so.6.0 00:01:37.200 SYMLINK libspdk_bdev_lvol.so 00:01:37.200 SYMLINK libspdk_bdev_virtio.so 00:01:37.458 LIB libspdk_bdev_raid.a 00:01:37.458 SO libspdk_bdev_raid.so.6.0 00:01:37.458 SYMLINK libspdk_bdev_raid.so 00:01:38.421 LIB libspdk_bdev_nvme.a 00:01:38.679 SO libspdk_bdev_nvme.so.7.0 00:01:38.679 SYMLINK libspdk_bdev_nvme.so 00:01:38.938 CC module/event/subsystems/iobuf/iobuf.o 00:01:38.938 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:38.938 CC module/event/subsystems/vmd/vmd.o 00:01:38.938 CC module/event/subsystems/sock/sock.o 00:01:38.938 CC module/event/subsystems/scheduler/scheduler.o 00:01:38.938 CC module/event/subsystems/keyring/keyring.o 00:01:38.938 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:38.938 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:01:39.196 LIB libspdk_event_sock.a 00:01:39.196 LIB libspdk_event_keyring.a 00:01:39.196 LIB libspdk_event_vhost_blk.a 00:01:39.196 LIB libspdk_event_vmd.a 00:01:39.196 LIB libspdk_event_scheduler.a 00:01:39.196 SO libspdk_event_sock.so.5.0 00:01:39.196 LIB libspdk_event_iobuf.a 00:01:39.196 SO libspdk_event_keyring.so.1.0 00:01:39.196 SO libspdk_event_vhost_blk.so.3.0 00:01:39.196 SO libspdk_event_scheduler.so.4.0 00:01:39.196 SO libspdk_event_vmd.so.6.0 00:01:39.196 SO libspdk_event_iobuf.so.3.0 00:01:39.197 SYMLINK libspdk_event_sock.so 00:01:39.197 SYMLINK libspdk_event_keyring.so 00:01:39.197 SYMLINK libspdk_event_vhost_blk.so 00:01:39.197 SYMLINK libspdk_event_scheduler.so 00:01:39.197 SYMLINK libspdk_event_vmd.so 00:01:39.197 SYMLINK libspdk_event_iobuf.so 00:01:39.455 CC module/event/subsystems/accel/accel.o 00:01:39.713 LIB libspdk_event_accel.a 00:01:39.713 SO libspdk_event_accel.so.6.0 00:01:39.713 SYMLINK libspdk_event_accel.so 00:01:39.713 CC module/event/subsystems/bdev/bdev.o 00:01:39.972 LIB libspdk_event_bdev.a 00:01:39.972 SO libspdk_event_bdev.so.6.0 00:01:39.972 SYMLINK libspdk_event_bdev.so 00:01:40.230 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:40.230 CC module/event/subsystems/scsi/scsi.o 00:01:40.230 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:40.230 CC module/event/subsystems/nbd/nbd.o 00:01:40.230 CC module/event/subsystems/ublk/ublk.o 00:01:40.488 LIB libspdk_event_ublk.a 00:01:40.488 LIB libspdk_event_nbd.a 00:01:40.488 LIB libspdk_event_scsi.a 00:01:40.488 SO libspdk_event_nbd.so.6.0 00:01:40.488 SO libspdk_event_ublk.so.3.0 00:01:40.488 SO libspdk_event_scsi.so.6.0 00:01:40.488 SYMLINK libspdk_event_nbd.so 00:01:40.488 SYMLINK libspdk_event_ublk.so 00:01:40.488 LIB libspdk_event_nvmf.a 00:01:40.488 SYMLINK libspdk_event_scsi.so 00:01:40.488 SO libspdk_event_nvmf.so.6.0 00:01:40.488 SYMLINK libspdk_event_nvmf.so 00:01:40.747 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:40.747 CC module/event/subsystems/iscsi/iscsi.o 00:01:40.747 LIB libspdk_event_vhost_scsi.a 00:01:40.747 LIB libspdk_event_iscsi.a 00:01:40.747 SO libspdk_event_vhost_scsi.so.3.0 00:01:40.747 SO libspdk_event_iscsi.so.6.0 00:01:40.747 SYMLINK libspdk_event_vhost_scsi.so 00:01:41.007 SYMLINK libspdk_event_iscsi.so 00:01:41.007 SO libspdk.so.6.0 00:01:41.007 SYMLINK libspdk.so 00:01:41.269 CC app/trace_record/trace_record.o 00:01:41.269 CC app/spdk_nvme_identify/identify.o 00:01:41.269 CXX app/trace/trace.o 00:01:41.269 TEST_HEADER include/spdk/accel.h 00:01:41.269 TEST_HEADER include/spdk/accel_module.h 00:01:41.269 TEST_HEADER include/spdk/assert.h 00:01:41.269 CC app/spdk_nvme_perf/perf.o 00:01:41.269 CC app/spdk_top/spdk_top.o 00:01:41.269 TEST_HEADER include/spdk/barrier.h 00:01:41.269 CC app/spdk_lspci/spdk_lspci.o 00:01:41.269 CC app/spdk_nvme_discover/discovery_aer.o 00:01:41.269 TEST_HEADER include/spdk/base64.h 00:01:41.269 CC test/rpc_client/rpc_client_test.o 00:01:41.269 TEST_HEADER include/spdk/bdev.h 00:01:41.269 TEST_HEADER include/spdk/bdev_module.h 00:01:41.269 TEST_HEADER include/spdk/bdev_zone.h 00:01:41.269 TEST_HEADER include/spdk/bit_array.h 00:01:41.269 TEST_HEADER include/spdk/bit_pool.h 00:01:41.269 TEST_HEADER include/spdk/blob_bdev.h 00:01:41.269 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:41.269 TEST_HEADER include/spdk/blobfs.h 00:01:41.269 TEST_HEADER include/spdk/blob.h 00:01:41.269 TEST_HEADER include/spdk/conf.h 00:01:41.269 TEST_HEADER include/spdk/config.h 00:01:41.269 TEST_HEADER 
include/spdk/cpuset.h 00:01:41.269 TEST_HEADER include/spdk/crc16.h 00:01:41.269 TEST_HEADER include/spdk/crc32.h 00:01:41.269 TEST_HEADER include/spdk/crc64.h 00:01:41.269 TEST_HEADER include/spdk/dif.h 00:01:41.269 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:41.269 TEST_HEADER include/spdk/dma.h 00:01:41.269 CC app/spdk_dd/spdk_dd.o 00:01:41.269 TEST_HEADER include/spdk/endian.h 00:01:41.269 CC app/nvmf_tgt/nvmf_main.o 00:01:41.269 TEST_HEADER include/spdk/env_dpdk.h 00:01:41.269 CC app/vhost/vhost.o 00:01:41.269 TEST_HEADER include/spdk/env.h 00:01:41.269 CC app/iscsi_tgt/iscsi_tgt.o 00:01:41.269 TEST_HEADER include/spdk/event.h 00:01:41.269 TEST_HEADER include/spdk/fd_group.h 00:01:41.269 TEST_HEADER include/spdk/fd.h 00:01:41.269 TEST_HEADER include/spdk/file.h 00:01:41.269 TEST_HEADER include/spdk/ftl.h 00:01:41.269 TEST_HEADER include/spdk/gpt_spec.h 00:01:41.269 TEST_HEADER include/spdk/hexlify.h 00:01:41.269 TEST_HEADER include/spdk/histogram_data.h 00:01:41.269 TEST_HEADER include/spdk/idxd.h 00:01:41.269 TEST_HEADER include/spdk/idxd_spec.h 00:01:41.269 TEST_HEADER include/spdk/init.h 00:01:41.269 TEST_HEADER include/spdk/ioat.h 00:01:41.269 TEST_HEADER include/spdk/ioat_spec.h 00:01:41.269 CC test/app/histogram_perf/histogram_perf.o 00:01:41.269 TEST_HEADER include/spdk/iscsi_spec.h 00:01:41.269 CC app/spdk_tgt/spdk_tgt.o 00:01:41.269 TEST_HEADER include/spdk/json.h 00:01:41.269 CC test/app/jsoncat/jsoncat.o 00:01:41.269 CC examples/idxd/perf/perf.o 00:01:41.269 TEST_HEADER include/spdk/jsonrpc.h 00:01:41.269 CC examples/vmd/lsvmd/lsvmd.o 00:01:41.269 CC test/env/vtophys/vtophys.o 00:01:41.269 CC examples/nvme/hello_world/hello_world.o 00:01:41.269 CC examples/sock/hello_world/hello_sock.o 00:01:41.269 TEST_HEADER include/spdk/keyring.h 00:01:41.269 CC examples/accel/perf/accel_perf.o 00:01:41.269 CC test/event/event_perf/event_perf.o 00:01:41.269 TEST_HEADER include/spdk/keyring_module.h 00:01:41.269 CC examples/util/zipf/zipf.o 00:01:41.269 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:41.269 CC test/app/stub/stub.o 00:01:41.269 TEST_HEADER include/spdk/likely.h 00:01:41.269 CC examples/nvme/reconnect/reconnect.o 00:01:41.269 TEST_HEADER include/spdk/log.h 00:01:41.269 CC examples/vmd/led/led.o 00:01:41.269 CC test/nvme/aer/aer.o 00:01:41.269 CC examples/ioat/perf/perf.o 00:01:41.269 CC examples/ioat/verify/verify.o 00:01:41.269 TEST_HEADER include/spdk/lvol.h 00:01:41.269 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:41.269 CC app/fio/nvme/fio_plugin.o 00:01:41.269 CC test/thread/poller_perf/poller_perf.o 00:01:41.269 TEST_HEADER include/spdk/memory.h 00:01:41.269 TEST_HEADER include/spdk/mmio.h 00:01:41.269 TEST_HEADER include/spdk/nbd.h 00:01:41.269 TEST_HEADER include/spdk/notify.h 00:01:41.269 TEST_HEADER include/spdk/nvme.h 00:01:41.538 TEST_HEADER include/spdk/nvme_intel.h 00:01:41.538 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:41.538 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:41.538 TEST_HEADER include/spdk/nvme_spec.h 00:01:41.538 TEST_HEADER include/spdk/nvme_zns.h 00:01:41.538 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:41.538 CC test/app/bdev_svc/bdev_svc.o 00:01:41.538 CC test/accel/dif/dif.o 00:01:41.538 CC test/bdev/bdevio/bdevio.o 00:01:41.538 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:41.538 CC test/dma/test_dma/test_dma.o 00:01:41.538 TEST_HEADER include/spdk/nvmf.h 00:01:41.538 CC examples/bdev/hello_world/hello_bdev.o 00:01:41.538 CC examples/bdev/bdevperf/bdevperf.o 00:01:41.538 CC examples/nvmf/nvmf/nvmf.o 00:01:41.538 
TEST_HEADER include/spdk/nvmf_spec.h 00:01:41.538 CC examples/blob/hello_world/hello_blob.o 00:01:41.538 CC examples/thread/thread/thread_ex.o 00:01:41.538 TEST_HEADER include/spdk/nvmf_transport.h 00:01:41.538 TEST_HEADER include/spdk/opal.h 00:01:41.538 TEST_HEADER include/spdk/opal_spec.h 00:01:41.538 CC test/blobfs/mkfs/mkfs.o 00:01:41.538 TEST_HEADER include/spdk/pci_ids.h 00:01:41.538 TEST_HEADER include/spdk/pipe.h 00:01:41.538 TEST_HEADER include/spdk/queue.h 00:01:41.538 TEST_HEADER include/spdk/reduce.h 00:01:41.538 TEST_HEADER include/spdk/rpc.h 00:01:41.538 TEST_HEADER include/spdk/scheduler.h 00:01:41.538 TEST_HEADER include/spdk/scsi.h 00:01:41.538 TEST_HEADER include/spdk/scsi_spec.h 00:01:41.538 TEST_HEADER include/spdk/sock.h 00:01:41.538 TEST_HEADER include/spdk/stdinc.h 00:01:41.538 TEST_HEADER include/spdk/string.h 00:01:41.538 TEST_HEADER include/spdk/thread.h 00:01:41.538 TEST_HEADER include/spdk/trace.h 00:01:41.538 TEST_HEADER include/spdk/trace_parser.h 00:01:41.538 TEST_HEADER include/spdk/tree.h 00:01:41.538 LINK spdk_lspci 00:01:41.538 CC test/env/mem_callbacks/mem_callbacks.o 00:01:41.538 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:41.538 TEST_HEADER include/spdk/ublk.h 00:01:41.538 TEST_HEADER include/spdk/util.h 00:01:41.538 TEST_HEADER include/spdk/uuid.h 00:01:41.538 TEST_HEADER include/spdk/version.h 00:01:41.538 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:41.538 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:41.538 TEST_HEADER include/spdk/vhost.h 00:01:41.538 TEST_HEADER include/spdk/vmd.h 00:01:41.538 TEST_HEADER include/spdk/xor.h 00:01:41.538 TEST_HEADER include/spdk/zipf.h 00:01:41.538 CC test/lvol/esnap/esnap.o 00:01:41.538 CXX test/cpp_headers/accel.o 00:01:41.538 LINK rpc_client_test 00:01:41.538 LINK jsoncat 00:01:41.538 LINK histogram_perf 00:01:41.538 LINK vtophys 00:01:41.538 LINK spdk_nvme_discover 00:01:41.538 LINK lsvmd 00:01:41.538 LINK event_perf 00:01:41.538 LINK interrupt_tgt 00:01:41.799 LINK nvmf_tgt 00:01:41.799 LINK led 00:01:41.799 LINK zipf 00:01:41.799 LINK poller_perf 00:01:41.799 LINK spdk_trace_record 00:01:41.799 LINK vhost 00:01:41.799 LINK env_dpdk_post_init 00:01:41.799 LINK stub 00:01:41.799 LINK iscsi_tgt 00:01:41.799 LINK spdk_tgt 00:01:41.799 LINK bdev_svc 00:01:41.799 LINK verify 00:01:41.799 LINK hello_world 00:01:41.799 LINK ioat_perf 00:01:41.799 LINK hello_sock 00:01:41.799 LINK mkfs 00:01:41.799 LINK hello_blob 00:01:42.065 CXX test/cpp_headers/accel_module.o 00:01:42.065 LINK aer 00:01:42.065 LINK thread 00:01:42.065 LINK hello_bdev 00:01:42.065 LINK spdk_dd 00:01:42.065 CXX test/cpp_headers/assert.o 00:01:42.065 LINK idxd_perf 00:01:42.065 CC test/event/reactor/reactor.o 00:01:42.065 CXX test/cpp_headers/barrier.o 00:01:42.065 LINK nvmf 00:01:42.065 CC test/event/reactor_perf/reactor_perf.o 00:01:42.065 LINK spdk_trace 00:01:42.065 LINK reconnect 00:01:42.065 CXX test/cpp_headers/base64.o 00:01:42.065 CC examples/nvme/arbitration/arbitration.o 00:01:42.065 CC test/env/memory/memory_ut.o 00:01:42.065 LINK dif 00:01:42.065 CC test/env/pci/pci_ut.o 00:01:42.065 CC test/nvme/reset/reset.o 00:01:42.065 LINK bdevio 00:01:42.065 LINK test_dma 00:01:42.065 CC examples/nvme/hotplug/hotplug.o 00:01:42.065 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:42.327 CC examples/blob/cli/blobcli.o 00:01:42.327 CC app/fio/bdev/fio_plugin.o 00:01:42.327 CC test/nvme/sgl/sgl.o 00:01:42.327 CXX test/cpp_headers/bdev.o 00:01:42.327 CXX test/cpp_headers/bdev_module.o 00:01:42.327 CC test/event/app_repeat/app_repeat.o 00:01:42.327 
LINK accel_perf 00:01:42.327 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:42.327 CXX test/cpp_headers/bdev_zone.o 00:01:42.327 CXX test/cpp_headers/bit_array.o 00:01:42.327 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:42.327 LINK nvme_fuzz 00:01:42.327 CC test/event/scheduler/scheduler.o 00:01:42.327 LINK nvme_manage 00:01:42.327 CC test/nvme/e2edp/nvme_dp.o 00:01:42.327 CXX test/cpp_headers/bit_pool.o 00:01:42.327 CC test/nvme/overhead/overhead.o 00:01:42.327 CXX test/cpp_headers/blob_bdev.o 00:01:42.327 LINK reactor 00:01:42.327 CC examples/nvme/abort/abort.o 00:01:42.327 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:42.327 LINK reactor_perf 00:01:42.327 LINK spdk_nvme 00:01:42.327 CXX test/cpp_headers/blobfs_bdev.o 00:01:42.327 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:42.327 CXX test/cpp_headers/blobfs.o 00:01:42.327 CC test/nvme/err_injection/err_injection.o 00:01:42.591 CC test/nvme/startup/startup.o 00:01:42.591 CC test/nvme/reserve/reserve.o 00:01:42.591 CXX test/cpp_headers/blob.o 00:01:42.591 CC test/nvme/simple_copy/simple_copy.o 00:01:42.591 LINK app_repeat 00:01:42.591 CC test/nvme/boot_partition/boot_partition.o 00:01:42.591 CC test/nvme/connect_stress/connect_stress.o 00:01:42.591 CXX test/cpp_headers/conf.o 00:01:42.591 CC test/nvme/compliance/nvme_compliance.o 00:01:42.591 CXX test/cpp_headers/config.o 00:01:42.591 CXX test/cpp_headers/cpuset.o 00:01:42.591 CXX test/cpp_headers/crc16.o 00:01:42.591 CXX test/cpp_headers/crc32.o 00:01:42.591 CXX test/cpp_headers/crc64.o 00:01:42.591 CXX test/cpp_headers/dif.o 00:01:42.591 CXX test/cpp_headers/dma.o 00:01:42.591 CXX test/cpp_headers/endian.o 00:01:42.591 LINK cmb_copy 00:01:42.591 LINK hotplug 00:01:42.591 CC test/nvme/fused_ordering/fused_ordering.o 00:01:42.591 CXX test/cpp_headers/env_dpdk.o 00:01:42.591 LINK reset 00:01:42.591 LINK sgl 00:01:42.591 LINK mem_callbacks 00:01:42.591 CXX test/cpp_headers/env.o 00:01:42.857 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:42.857 CXX test/cpp_headers/event.o 00:01:42.857 CC test/nvme/fdp/fdp.o 00:01:42.857 LINK scheduler 00:01:42.857 CC test/nvme/cuse/cuse.o 00:01:42.857 LINK arbitration 00:01:42.857 LINK pmr_persistence 00:01:42.857 LINK spdk_nvme_perf 00:01:42.857 CXX test/cpp_headers/fd_group.o 00:01:42.857 LINK spdk_nvme_identify 00:01:42.857 CXX test/cpp_headers/fd.o 00:01:42.857 LINK err_injection 00:01:42.857 LINK spdk_top 00:01:42.857 LINK nvme_dp 00:01:42.857 LINK bdevperf 00:01:42.857 CXX test/cpp_headers/file.o 00:01:42.857 LINK startup 00:01:42.857 LINK overhead 00:01:42.857 CXX test/cpp_headers/ftl.o 00:01:42.857 LINK pci_ut 00:01:42.857 LINK boot_partition 00:01:42.857 LINK reserve 00:01:42.857 CXX test/cpp_headers/gpt_spec.o 00:01:42.857 CXX test/cpp_headers/hexlify.o 00:01:42.857 CXX test/cpp_headers/histogram_data.o 00:01:42.857 LINK connect_stress 00:01:43.134 CXX test/cpp_headers/idxd.o 00:01:43.135 CXX test/cpp_headers/idxd_spec.o 00:01:43.135 CXX test/cpp_headers/init.o 00:01:43.135 CXX test/cpp_headers/ioat.o 00:01:43.135 CXX test/cpp_headers/ioat_spec.o 00:01:43.135 CXX test/cpp_headers/iscsi_spec.o 00:01:43.135 CXX test/cpp_headers/json.o 00:01:43.135 LINK simple_copy 00:01:43.135 CXX test/cpp_headers/jsonrpc.o 00:01:43.135 CXX test/cpp_headers/keyring.o 00:01:43.135 CXX test/cpp_headers/keyring_module.o 00:01:43.135 LINK abort 00:01:43.135 CXX test/cpp_headers/likely.o 00:01:43.135 LINK fused_ordering 00:01:43.135 CXX test/cpp_headers/log.o 00:01:43.135 CXX test/cpp_headers/lvol.o 00:01:43.135 CXX test/cpp_headers/memory.o 
00:01:43.135 CXX test/cpp_headers/mmio.o 00:01:43.135 CXX test/cpp_headers/nbd.o 00:01:43.135 LINK vhost_fuzz 00:01:43.135 LINK doorbell_aers 00:01:43.135 CXX test/cpp_headers/notify.o 00:01:43.135 CXX test/cpp_headers/nvme_intel.o 00:01:43.135 CXX test/cpp_headers/nvme.o 00:01:43.135 LINK blobcli 00:01:43.135 LINK spdk_bdev 00:01:43.135 CXX test/cpp_headers/nvme_ocssd.o 00:01:43.135 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:43.135 CXX test/cpp_headers/nvme_spec.o 00:01:43.135 CXX test/cpp_headers/nvme_zns.o 00:01:43.135 CXX test/cpp_headers/nvmf_cmd.o 00:01:43.135 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:43.135 CXX test/cpp_headers/nvmf.o 00:01:43.135 LINK nvme_compliance 00:01:43.400 CXX test/cpp_headers/nvmf_spec.o 00:01:43.400 CXX test/cpp_headers/nvmf_transport.o 00:01:43.400 CXX test/cpp_headers/opal.o 00:01:43.400 CXX test/cpp_headers/opal_spec.o 00:01:43.400 CXX test/cpp_headers/pci_ids.o 00:01:43.400 CXX test/cpp_headers/pipe.o 00:01:43.400 CXX test/cpp_headers/queue.o 00:01:43.400 CXX test/cpp_headers/reduce.o 00:01:43.400 CXX test/cpp_headers/rpc.o 00:01:43.400 LINK fdp 00:01:43.400 CXX test/cpp_headers/scheduler.o 00:01:43.400 CXX test/cpp_headers/scsi.o 00:01:43.400 CXX test/cpp_headers/scsi_spec.o 00:01:43.400 CXX test/cpp_headers/sock.o 00:01:43.400 CXX test/cpp_headers/stdinc.o 00:01:43.400 CXX test/cpp_headers/thread.o 00:01:43.400 CXX test/cpp_headers/string.o 00:01:43.400 CXX test/cpp_headers/trace.o 00:01:43.400 CXX test/cpp_headers/trace_parser.o 00:01:43.400 CXX test/cpp_headers/tree.o 00:01:43.400 CXX test/cpp_headers/ublk.o 00:01:43.400 CXX test/cpp_headers/version.o 00:01:43.400 CXX test/cpp_headers/uuid.o 00:01:43.400 CXX test/cpp_headers/util.o 00:01:43.400 CXX test/cpp_headers/vfio_user_pci.o 00:01:43.400 CXX test/cpp_headers/vfio_user_spec.o 00:01:43.400 CXX test/cpp_headers/vhost.o 00:01:43.400 CXX test/cpp_headers/vmd.o 00:01:43.400 CXX test/cpp_headers/xor.o 00:01:43.400 CXX test/cpp_headers/zipf.o 00:01:43.660 LINK memory_ut 00:01:44.225 LINK cuse 00:01:44.485 LINK iscsi_fuzz 00:01:47.022 LINK esnap 00:01:47.281 00:01:47.281 real 0m47.408s 00:01:47.281 user 9m47.975s 00:01:47.281 sys 2m19.114s 00:01:47.281 17:18:13 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:47.281 17:18:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.281 ************************************ 00:01:47.281 END TEST make 00:01:47.281 ************************************ 00:01:47.281 17:18:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:47.281 17:18:13 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:47.281 17:18:13 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:47.281 17:18:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.281 17:18:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:47.281 17:18:13 -- pm/common@45 -- $ pid=1804829 00:01:47.281 17:18:13 -- pm/common@52 -- $ sudo kill -TERM 1804829 00:01:47.281 17:18:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.281 17:18:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:47.281 17:18:13 -- pm/common@45 -- $ pid=1804831 00:01:47.281 17:18:13 -- pm/common@52 -- $ sudo kill -TERM 1804831 00:01:47.281 17:18:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.281 17:18:13 -- pm/common@44 -- $ [[ -e 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:47.281 17:18:13 -- pm/common@45 -- $ pid=1804830 00:01:47.281 17:18:13 -- pm/common@52 -- $ sudo kill -TERM 1804830 00:01:47.281 17:18:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.281 17:18:13 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:47.281 17:18:13 -- pm/common@45 -- $ pid=1804832 00:01:47.281 17:18:13 -- pm/common@52 -- $ sudo kill -TERM 1804832 00:01:47.540 17:18:13 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:01:47.540 17:18:13 -- nvmf/common.sh@7 -- # uname -s 00:01:47.540 17:18:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:47.540 17:18:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:47.540 17:18:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:47.540 17:18:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:47.540 17:18:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:47.540 17:18:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:47.540 17:18:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:47.540 17:18:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:47.540 17:18:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:47.540 17:18:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:47.540 17:18:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:47.540 17:18:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:47.540 17:18:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:47.540 17:18:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:47.540 17:18:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:47.540 17:18:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:47.540 17:18:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:47.540 17:18:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:47.540 17:18:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:47.540 17:18:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:47.540 17:18:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.540 17:18:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.540 17:18:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.540 17:18:13 -- paths/export.sh@5 -- # export PATH 00:01:47.540 17:18:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.540 17:18:13 -- nvmf/common.sh@47 -- # : 0 00:01:47.540 17:18:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:47.540 17:18:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:47.540 17:18:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:47.540 17:18:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:47.540 17:18:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:47.540 17:18:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:47.540 17:18:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:47.540 17:18:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:47.540 17:18:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:47.540 17:18:13 -- spdk/autotest.sh@32 -- # uname -s 00:01:47.540 17:18:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:47.540 17:18:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:47.540 17:18:13 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:01:47.540 17:18:13 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:47.540 17:18:13 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:01:47.540 17:18:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:47.540 17:18:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:47.540 17:18:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:47.540 17:18:13 -- spdk/autotest.sh@48 -- # udevadm_pid=1859548 00:01:47.540 17:18:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:47.540 17:18:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:47.540 17:18:13 -- pm/common@17 -- # local monitor 00:01:47.540 17:18:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.540 17:18:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1859549 00:01:47.540 17:18:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.540 17:18:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1859551 00:01:47.540 17:18:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.540 17:18:13 -- pm/common@21 -- # date +%s 00:01:47.540 17:18:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1859555 00:01:47.540 17:18:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.540 17:18:13 -- pm/common@21 -- # date +%s 00:01:47.540 17:18:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1859558 00:01:47.540 17:18:13 -- pm/common@21 -- # date +%s 00:01:47.540 17:18:13 -- pm/common@26 -- # sleep 1 00:01:47.540 17:18:13 -- pm/common@21 -- # date +%s 00:01:47.540 17:18:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713453493 00:01:47.540 17:18:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713453493 00:01:47.540 17:18:13 -- pm/common@21 -- # sudo -E 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713453493 00:01:47.540 17:18:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713453493 00:01:47.540 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713453493_collect-bmc-pm.bmc.pm.log 00:01:47.540 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713453493_collect-vmstat.pm.log 00:01:47.540 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713453493_collect-cpu-temp.pm.log 00:01:47.540 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713453493_collect-cpu-load.pm.log 00:01:48.480 17:18:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:48.480 17:18:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:48.480 17:18:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:48.480 17:18:14 -- common/autotest_common.sh@10 -- # set +x 00:01:48.480 17:18:14 -- spdk/autotest.sh@59 -- # create_test_list 00:01:48.480 17:18:14 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:48.480 17:18:14 -- common/autotest_common.sh@10 -- # set +x 00:01:48.480 17:18:14 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:01:48.480 17:18:14 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:48.480 17:18:14 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:48.480 17:18:14 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:48.480 17:18:14 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:48.480 17:18:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:48.480 17:18:14 -- common/autotest_common.sh@1441 -- # uname 00:01:48.480 17:18:14 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:48.480 17:18:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:48.480 17:18:14 -- common/autotest_common.sh@1461 -- # uname 00:01:48.480 17:18:14 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:48.480 17:18:14 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:48.480 17:18:14 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:48.480 17:18:14 -- spdk/autotest.sh@72 -- # hash lcov 00:01:48.480 17:18:14 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:48.480 17:18:14 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:48.480 --rc lcov_branch_coverage=1 00:01:48.480 --rc lcov_function_coverage=1 00:01:48.480 --rc genhtml_branch_coverage=1 00:01:48.480 --rc genhtml_function_coverage=1 00:01:48.480 --rc genhtml_legend=1 00:01:48.480 --rc geninfo_all_blocks=1 00:01:48.480 ' 00:01:48.480 17:18:14 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:48.480 --rc lcov_branch_coverage=1 00:01:48.480 --rc lcov_function_coverage=1 00:01:48.480 --rc genhtml_branch_coverage=1 00:01:48.480 --rc genhtml_function_coverage=1 00:01:48.480 --rc genhtml_legend=1 00:01:48.480 --rc geninfo_all_blocks=1 00:01:48.480 ' 00:01:48.480 17:18:14 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:48.480 --rc 
lcov_branch_coverage=1 00:01:48.480 --rc lcov_function_coverage=1 00:01:48.480 --rc genhtml_branch_coverage=1 00:01:48.480 --rc genhtml_function_coverage=1 00:01:48.480 --rc genhtml_legend=1 00:01:48.480 --rc geninfo_all_blocks=1 00:01:48.480 --no-external' 00:01:48.480 17:18:14 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:48.480 --rc lcov_branch_coverage=1 00:01:48.480 --rc lcov_function_coverage=1 00:01:48.480 --rc genhtml_branch_coverage=1 00:01:48.480 --rc genhtml_function_coverage=1 00:01:48.480 --rc genhtml_legend=1 00:01:48.480 --rc geninfo_all_blocks=1 00:01:48.480 --no-external' 00:01:48.480 17:18:14 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:48.739 lcov: LCOV version 1.14 00:01:48.739 17:18:15 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:00.946 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:00.946 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:01.884 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:01.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:01.884 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:01.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:01.884 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:01.884 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:20.053 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:20.053 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:20.053 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:20.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:20.054 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:20.054 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:20.054 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:20.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:20.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:20.055 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:20.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:20.055 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:20.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:20.055 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:20.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:20.055 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:20.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:20.055 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:20.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:20.055 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:20.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:20.314 17:18:46 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:20.314 17:18:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:20.314 17:18:46 -- common/autotest_common.sh@10 -- # set +x 00:02:20.314 17:18:46 -- spdk/autotest.sh@91 -- # rm -f 00:02:20.314 17:18:46 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:21.691 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:21.691 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:21.691 0000:00:04.6 (8086 0e26): Already using the 
ioatdma driver 00:02:21.691 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:21.691 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:21.691 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:21.691 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:21.691 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:21.691 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:21.691 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:21.691 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:21.691 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:21.691 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:21.691 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:21.691 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:21.691 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:21.691 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:21.691 17:18:48 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:21.691 17:18:48 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:21.691 17:18:48 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:21.691 17:18:48 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:21.691 17:18:48 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:21.691 17:18:48 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:21.691 17:18:48 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:21.691 17:18:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:21.691 17:18:48 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:21.691 17:18:48 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:21.691 17:18:48 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:21.691 17:18:48 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:21.691 17:18:48 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:21.691 17:18:48 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:21.691 17:18:48 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:21.691 No valid GPT data, bailing 00:02:21.691 17:18:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:21.691 17:18:48 -- scripts/common.sh@391 -- # pt= 00:02:21.691 17:18:48 -- scripts/common.sh@392 -- # return 1 00:02:21.691 17:18:48 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:21.691 1+0 records in 00:02:21.691 1+0 records out 00:02:21.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00235032 s, 446 MB/s 00:02:21.691 17:18:48 -- spdk/autotest.sh@118 -- # sync 00:02:21.691 17:18:48 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:21.691 17:18:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:21.691 17:18:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:23.600 17:18:49 -- spdk/autotest.sh@124 -- # uname -s 00:02:23.600 17:18:49 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:23.600 17:18:49 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:23.600 17:18:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:23.600 17:18:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:23.600 17:18:49 -- common/autotest_common.sh@10 -- # set +x 
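
Before the functional tests start, autotest.sh scrubs every whole NVMe namespace that is neither zoned nor carrying a partition table, which is what the get_zoned_devs / block_in_use / dd sequence above is doing. A condensed sketch of that logic (simplified; the real helpers live in autotest.sh and scripts/common.sh, and the spdk-gpt.py probe is reduced here to the blkid check also shown in the trace):

    shopt -s extglob                                   # needed for the nvme*n!(*p*) pattern
    for dev in /dev/nvme*n!(*p*); do                   # whole namespaces only, no partition nodes
        name=${dev#/dev/}
        # zoned namespaces are recorded in zoned_devs and excluded from the wipe
        [[ -e /sys/block/$name/queue/zoned && $(</sys/block/$name/queue/zoned) != none ]] && continue
        # the trace probes the device with scripts/spdk-gpt.py first ("No valid GPT data,
        # bailing") and then blkid; only a device with no partition table gets wiped
        [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue
        dd if=/dev/zero of="$dev" bs=1M count=1        # clobber the first 1 MiB
    done
    sync
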
00:02:23.600 ************************************ 00:02:23.600 START TEST setup.sh 00:02:23.600 ************************************ 00:02:23.600 17:18:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:02:23.600 * Looking for test storage... 00:02:23.600 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:23.600 17:18:50 -- setup/test-setup.sh@10 -- # uname -s 00:02:23.600 17:18:50 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:23.600 17:18:50 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:23.600 17:18:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:23.600 17:18:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:23.600 17:18:50 -- common/autotest_common.sh@10 -- # set +x 00:02:23.858 ************************************ 00:02:23.858 START TEST acl 00:02:23.858 ************************************ 00:02:23.858 17:18:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:02:23.858 * Looking for test storage... 00:02:23.858 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:23.858 17:18:50 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:23.859 17:18:50 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:23.859 17:18:50 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:23.859 17:18:50 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:23.859 17:18:50 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:23.859 17:18:50 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:23.859 17:18:50 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:23.859 17:18:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:23.859 17:18:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:23.859 17:18:50 -- setup/acl.sh@12 -- # devs=() 00:02:23.859 17:18:50 -- setup/acl.sh@12 -- # declare -a devs 00:02:23.859 17:18:50 -- setup/acl.sh@13 -- # drivers=() 00:02:23.859 17:18:50 -- setup/acl.sh@13 -- # declare -A drivers 00:02:23.859 17:18:50 -- setup/acl.sh@51 -- # setup reset 00:02:23.859 17:18:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:23.859 17:18:50 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:25.235 17:18:51 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:25.235 17:18:51 -- setup/acl.sh@16 -- # local dev driver 00:02:25.235 17:18:51 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:25.235 17:18:51 -- setup/acl.sh@15 -- # setup output status 00:02:25.235 17:18:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:25.235 17:18:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:26.172 Hugepages 00:02:26.172 node hugesize free / total 00:02:26.172 17:18:52 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:26.172 17:18:52 -- setup/acl.sh@19 -- # continue 00:02:26.172 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.431 17:18:52 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:26.431 17:18:52 -- setup/acl.sh@19 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # continue 
00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 00:02:26.432 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- 
setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # continue 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:26.432 17:18:52 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:26.432 17:18:52 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:26.432 17:18:52 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:26.432 17:18:52 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:26.432 17:18:52 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:26.432 17:18:52 -- setup/acl.sh@54 -- # run_test denied denied 00:02:26.432 17:18:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:26.432 17:18:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:26.432 17:18:52 -- common/autotest_common.sh@10 -- # set +x 00:02:26.432 ************************************ 00:02:26.432 START TEST denied 00:02:26.432 ************************************ 00:02:26.432 17:18:52 -- common/autotest_common.sh@1111 -- # denied 00:02:26.432 17:18:52 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:26.432 17:18:52 -- setup/acl.sh@38 -- # setup output config 00:02:26.432 17:18:52 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:26.432 17:18:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:26.432 17:18:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:27.811 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:27.811 17:18:54 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:27.811 17:18:54 -- setup/acl.sh@28 -- # local dev driver 00:02:27.811 17:18:54 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:27.811 17:18:54 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:27.811 17:18:54 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:27.811 17:18:54 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:27.811 17:18:54 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:27.811 17:18:54 -- setup/acl.sh@41 -- # setup reset 00:02:27.811 17:18:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:27.811 17:18:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:30.345 00:02:30.345 real 0m3.515s 00:02:30.345 user 0m1.044s 00:02:30.345 sys 0m1.614s 
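
The read -r _ dev _ _ _ driver _ loop traced above is acl.sh collecting controllers from the `setup.sh status` table (columns: Type BDF Vendor Device NUMA Driver Device Block devices), keeping only devices bound to the nvme driver; the denied test that just finished then re-runs setup.sh with that controller blocked and checks it was left alone. A condensed sketch (paths abbreviated and the setup/common.sh wrappers inlined):

    declare -a devs=()
    declare -A drivers=()
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue              # keep PCI BDF rows only, e.g. 0000:88:00.0
        [[ $driver == nvme ]] || continue              # ioatdma rows are skipped
        devs+=("$dev")
        drivers[$dev]=$driver
    done < <(scripts/setup.sh status)

    # denied: a blocked controller must be skipped by setup.sh and keep its kernel driver
    PCI_BLOCKED=' 0000:88:00.0' scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:88:00.0'
    driver=$(readlink -f /sys/bus/pci/devices/0000:88:00.0/driver)
    [[ ${driver##*/} == nvme ]]
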
00:02:30.345 17:18:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:30.345 17:18:56 -- common/autotest_common.sh@10 -- # set +x 00:02:30.345 ************************************ 00:02:30.345 END TEST denied 00:02:30.345 ************************************ 00:02:30.345 17:18:56 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:30.345 17:18:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:30.345 17:18:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:30.345 17:18:56 -- common/autotest_common.sh@10 -- # set +x 00:02:30.345 ************************************ 00:02:30.345 START TEST allowed 00:02:30.345 ************************************ 00:02:30.345 17:18:56 -- common/autotest_common.sh@1111 -- # allowed 00:02:30.345 17:18:56 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:30.345 17:18:56 -- setup/acl.sh@45 -- # setup output config 00:02:30.345 17:18:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:30.345 17:18:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:30.345 17:18:56 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:32.882 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:32.882 17:18:58 -- setup/acl.sh@47 -- # verify 00:02:32.882 17:18:58 -- setup/acl.sh@28 -- # local dev driver 00:02:32.882 17:18:58 -- setup/acl.sh@48 -- # setup reset 00:02:32.882 17:18:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:32.882 17:18:58 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:33.820 00:02:33.820 real 0m3.773s 00:02:33.820 user 0m1.054s 00:02:33.820 sys 0m1.646s 00:02:33.820 17:19:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:33.820 17:19:00 -- common/autotest_common.sh@10 -- # set +x 00:02:33.820 ************************************ 00:02:33.820 END TEST allowed 00:02:33.820 ************************************ 00:02:33.820 00:02:33.820 real 0m10.213s 00:02:33.820 user 0m3.249s 00:02:33.820 sys 0m5.092s 00:02:33.820 17:19:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:33.820 17:19:00 -- common/autotest_common.sh@10 -- # set +x 00:02:33.820 ************************************ 00:02:33.820 END TEST acl 00:02:33.820 ************************************ 00:02:34.079 17:19:00 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:02:34.079 17:19:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:34.079 17:19:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:34.079 17:19:00 -- common/autotest_common.sh@10 -- # set +x 00:02:34.079 ************************************ 00:02:34.080 START TEST hugepages 00:02:34.080 ************************************ 00:02:34.080 17:19:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:02:34.080 * Looking for test storage... 
00:02:34.080 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:34.080 17:19:00 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:34.080 17:19:00 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:34.080 17:19:00 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:34.080 17:19:00 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:34.080 17:19:00 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:34.080 17:19:00 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:34.080 17:19:00 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:34.080 17:19:00 -- setup/common.sh@18 -- # local node= 00:02:34.080 17:19:00 -- setup/common.sh@19 -- # local var val 00:02:34.080 17:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:34.080 17:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.080 17:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.080 17:19:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.080 17:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.080 17:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 38034656 kB' 'MemAvailable: 41711180 kB' 'Buffers: 2696 kB' 'Cached: 15802624 kB' 'SwapCached: 0 kB' 'Active: 12745972 kB' 'Inactive: 3488624 kB' 'Active(anon): 12161256 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 433196 kB' 'Mapped: 174092 kB' 'Shmem: 11731980 kB' 'KReclaimable: 194048 kB' 'Slab: 562436 kB' 'SReclaimable: 194048 kB' 'SUnreclaim: 368388 kB' 'KernelStack: 13040 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 13314812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.080 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.080 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # continue 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.081 17:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.081 17:19:00 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:34.081 17:19:00 -- setup/common.sh@33 -- # echo 2048 00:02:34.081 17:19:00 -- setup/common.sh@33 -- # return 0 00:02:34.081 17:19:00 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:34.081 17:19:00 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:34.081 17:19:00 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:34.081 17:19:00 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:34.081 17:19:00 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:34.081 17:19:00 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
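
The long run of [[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] checks above is hugepages.sh asking setup/common.sh's get_meminfo for the default hugepage size: the helper walks the meminfo key/value pairs one by one until it reaches the requested key, then echoes its value in kB (2048 here, i.e. 2 MiB pages). A condensed restatement of that helper for the system-wide case (the real function can also read /sys/devices/system/node/node<N>/meminfo when a node id is passed):

    get_meminfo() {
        local get=$1 var val rest
        # IFS=': ' splits "Hugepagesize:    2048 kB" into var=Hugepagesize val=2048 rest=kB
        while IFS=': ' read -r var val rest; do
            if [[ $var == "$get" ]]; then
                echo "$val"                            # value in kB
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo Hugepagesize)      # -> 2048 on this node
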
00:02:34.081 17:19:00 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:34.081 17:19:00 -- setup/hugepages.sh@207 -- # get_nodes 00:02:34.081 17:19:00 -- setup/hugepages.sh@27 -- # local node 00:02:34.081 17:19:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.081 17:19:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:34.081 17:19:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.081 17:19:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:34.081 17:19:00 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:34.081 17:19:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:34.081 17:19:00 -- setup/hugepages.sh@208 -- # clear_hp 00:02:34.081 17:19:00 -- setup/hugepages.sh@37 -- # local node hp 00:02:34.081 17:19:00 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:34.081 17:19:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:34.081 17:19:00 -- setup/hugepages.sh@41 -- # echo 0 00:02:34.081 17:19:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:34.081 17:19:00 -- setup/hugepages.sh@41 -- # echo 0 00:02:34.081 17:19:00 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:34.081 17:19:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:34.081 17:19:00 -- setup/hugepages.sh@41 -- # echo 0 00:02:34.081 17:19:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:34.081 17:19:00 -- setup/hugepages.sh@41 -- # echo 0 00:02:34.081 17:19:00 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:34.081 17:19:00 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:34.081 17:19:00 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:34.081 17:19:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:34.081 17:19:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:34.081 17:19:00 -- common/autotest_common.sh@10 -- # set +x 00:02:34.340 ************************************ 00:02:34.340 START TEST default_setup 00:02:34.340 ************************************ 00:02:34.340 17:19:00 -- common/autotest_common.sh@1111 -- # default_setup 00:02:34.340 17:19:00 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:34.340 17:19:00 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:34.340 17:19:00 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:34.340 17:19:00 -- setup/hugepages.sh@51 -- # shift 00:02:34.340 17:19:00 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:34.340 17:19:00 -- setup/hugepages.sh@52 -- # local node_ids 00:02:34.340 17:19:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:34.340 17:19:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:34.340 17:19:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:34.340 17:19:00 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:34.340 17:19:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:34.340 17:19:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:34.340 17:19:00 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:34.340 17:19:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:34.340 17:19:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:34.340 17:19:00 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:34.340 17:19:00 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:34.340 17:19:00 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:34.340 17:19:00 -- setup/hugepages.sh@73 -- # return 0 00:02:34.340 17:19:00 -- setup/hugepages.sh@137 -- # setup output 00:02:34.340 17:19:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.340 17:19:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:02:35.276 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:35.276 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:35.535 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:35.535 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:35.535 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:35.535 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:35.535 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:35.535 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:35.535 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:35.535 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:35.535 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:35.535 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:35.535 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:35.535 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:35.535 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:35.535 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:36.497 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:36.497 17:19:02 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:36.497 17:19:02 -- setup/hugepages.sh@89 -- # local node 00:02:36.497 17:19:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:36.497 17:19:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:36.497 17:19:02 -- setup/hugepages.sh@92 -- # local surp 00:02:36.497 17:19:02 -- setup/hugepages.sh@93 -- # local resv 00:02:36.497 17:19:02 -- setup/hugepages.sh@94 -- # local anon 00:02:36.497 17:19:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:36.497 17:19:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:36.497 17:19:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:36.497 17:19:02 -- setup/common.sh@18 -- # local node= 00:02:36.497 17:19:02 -- setup/common.sh@19 -- # local var val 00:02:36.497 17:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:02:36.497 17:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.497 17:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.497 17:19:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.497 17:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.497 17:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40125040 kB' 'MemAvailable: 43801500 kB' 'Buffers: 2696 kB' 'Cached: 15802892 kB' 'SwapCached: 0 kB' 'Active: 12771024 kB' 'Inactive: 3488624 kB' 'Active(anon): 12186308 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457164 kB' 'Mapped: 174712 kB' 'Shmem: 11732248 kB' 'KReclaimable: 193920 kB' 'Slab: 561720 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367800 kB' 'KernelStack: 13408 kB' 'PageTables: 9140 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13341364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196872 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 
-- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:02 -- setup/common.sh@32 -- # continue 00:02:36.497 17:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.497 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.497 17:19:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.497 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read 
-r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:36.498 17:19:03 -- setup/common.sh@33 -- # echo 0 00:02:36.498 17:19:03 -- setup/common.sh@33 -- # return 0 00:02:36.498 17:19:03 -- setup/hugepages.sh@97 -- # anon=0 00:02:36.498 17:19:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:36.498 17:19:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:36.498 17:19:03 -- setup/common.sh@18 -- # local node= 00:02:36.498 17:19:03 -- setup/common.sh@19 -- # local var val 00:02:36.498 17:19:03 -- setup/common.sh@20 -- # local mem_f mem 00:02:36.498 17:19:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.498 17:19:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.498 17:19:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.498 17:19:03 -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.498 17:19:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40128996 kB' 'MemAvailable: 43805456 kB' 'Buffers: 2696 kB' 'Cached: 15802896 kB' 'SwapCached: 0 kB' 'Active: 12770956 kB' 'Inactive: 3488624 kB' 'Active(anon): 12186240 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457392 kB' 'Mapped: 175092 kB' 'Shmem: 11732252 kB' 'KReclaimable: 193920 kB' 'Slab: 561824 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367904 kB' 'KernelStack: 13072 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13341380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196744 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 
'DirectMap1G: 52428800 kB' 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.498 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.498 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 
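Earlier in this run, clear_hp (setup/hugepages.sh@37-45, traced before default_setup started) zeroed every per-node hugepage pool and exported CLEAR_HUGE=yes; the xtrace shows the loop over /sys/devices/system/node/node*/hugepages/hugepages-* and an echo 0 per pool, but not where that 0 is written. A stand-alone sketch of the same pattern, assuming the write goes to each pool's nr_hugepages file and that root privileges are available:

# Sketch of the clear_hp pattern traced at setup/hugepages.sh@39-45: zero every
# hugepage pool on every NUMA node, then mark the pools as cleared.
# Writing to nr_hugepages (and using sudo tee) is an assumption; the redirect
# target is not visible in the xtrace output.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 | sudo tee "$hp/nr_hugepages" > /dev/null
    done
done
export CLEAR_HUGE=yes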
00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 
-- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.499 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.499 17:19:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.762 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.762 17:19:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.762 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.762 17:19:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.762 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.762 17:19:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.762 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.762 17:19:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.762 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.762 17:19:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:36.762 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.762 17:19:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.762 17:19:03 -- setup/common.sh@33 -- # echo 0 00:02:36.762 17:19:03 -- setup/common.sh@33 -- # return 0 00:02:36.762 17:19:03 -- setup/hugepages.sh@99 -- # surp=0 00:02:36.762 17:19:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:36.762 17:19:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:36.762 17:19:03 -- setup/common.sh@18 -- # local node= 00:02:36.762 17:19:03 -- setup/common.sh@19 -- # local var val 00:02:36.762 17:19:03 -- setup/common.sh@20 -- # local mem_f mem 00:02:36.762 17:19:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.762 17:19:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.762 17:19:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.762 17:19:03 -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.762 17:19:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.762 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40136068 kB' 'MemAvailable: 43812528 kB' 'Buffers: 2696 kB' 'Cached: 15802908 kB' 'SwapCached: 0 kB' 'Active: 12766312 kB' 'Inactive: 3488624 kB' 'Active(anon): 12181596 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 452664 kB' 'Mapped: 174572 kB' 'Shmem: 11732264 kB' 'KReclaimable: 193920 kB' 'Slab: 561696 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367776 kB' 'KernelStack: 12928 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13338476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 
00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.763 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.763 17:19:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:36.764 17:19:03 -- setup/common.sh@33 -- # echo 0 00:02:36.764 17:19:03 -- setup/common.sh@33 -- # return 0 00:02:36.764 17:19:03 -- setup/hugepages.sh@100 -- # resv=0 00:02:36.764 17:19:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:36.764 nr_hugepages=1024 00:02:36.764 17:19:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:36.764 resv_hugepages=0 00:02:36.764 17:19:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:36.764 surplus_hugepages=0 00:02:36.764 17:19:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:36.764 anon_hugepages=0 00:02:36.764 17:19:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:36.764 17:19:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:36.764 17:19:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:36.764 17:19:03 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:02:36.764 17:19:03 -- setup/common.sh@18 -- # local node= 00:02:36.764 17:19:03 -- setup/common.sh@19 -- # local var val 00:02:36.764 17:19:03 -- setup/common.sh@20 -- # local mem_f mem 00:02:36.764 17:19:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.764 17:19:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:36.764 17:19:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:36.764 17:19:03 -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.764 17:19:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40132880 kB' 'MemAvailable: 43809340 kB' 'Buffers: 2696 kB' 'Cached: 15802928 kB' 'SwapCached: 0 kB' 'Active: 12769192 kB' 'Inactive: 3488624 kB' 'Active(anon): 12184476 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455548 kB' 'Mapped: 174572 kB' 'Shmem: 11732284 kB' 'KReclaimable: 193920 kB' 'Slab: 561696 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367776 kB' 'KernelStack: 12944 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13341776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196776 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.764 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.764 17:19:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:36.764 17:19:03 -- setup/common.sh@32 -- # continue
[... xtrace condensed: setup/common.sh@32 compares the remaining /proc/meminfo fields, Active through ShmemPmdMapped, against HugePages_Total and skips each one via continue, interleaved with the usual IFS=': ' / read -r var val _ steps ...]
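The entries condensed above, and the ones that follow, are the inner loop of the setup/common.sh get_meminfo helper seen throughout this trace: each 'key: value' line of /proc/meminfo (or of a node's own meminfo file) is split with IFS=': ' and compared against the requested key until it matches, at which point the value is echoed and the function returns. A minimal stand-alone sketch of that pattern, reconstructed from this trace rather than quoted from the script:

# Sketch only: simplified re-creation of the get_meminfo loop whose xtrace appears above and below.
shopt -s extglob
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # A per-node query reads that node's own meminfo, whose lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix (extglob pattern)

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue  # the repeated 'continue' steps visible in the trace
        echo "$val"                       # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
        return 0
    done
    return 1
}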
00:02:36.765 17:19:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:36.765 17:19:03 -- setup/common.sh@33 -- # echo 1024 00:02:36.765 17:19:03 -- setup/common.sh@33 -- # return 0 00:02:36.765 17:19:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:36.765 17:19:03 -- setup/hugepages.sh@112 -- # get_nodes 00:02:36.765 17:19:03 -- setup/hugepages.sh@27 -- # local node 00:02:36.765 17:19:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:36.765 17:19:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:36.765 17:19:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:36.765 17:19:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:36.765 17:19:03 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:36.765 17:19:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:36.765 17:19:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:36.765 17:19:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:36.765 17:19:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:36.765 17:19:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:36.765 17:19:03 -- setup/common.sh@18 -- # local node=0 00:02:36.765 17:19:03 -- setup/common.sh@19 -- # local var val 00:02:36.765 17:19:03 -- setup/common.sh@20 -- # local mem_f mem 00:02:36.765 17:19:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:36.765 17:19:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:36.765 17:19:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:36.765 17:19:03 -- setup/common.sh@28 -- # mapfile -t mem 00:02:36.765 17:19:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.765 17:19:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19131100 kB' 'MemUsed: 13745840 kB' 'SwapCached: 0 
kB' 'Active: 7066320 kB' 'Inactive: 3327648 kB' 'Active(anon): 6833844 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10236740 kB' 'Mapped: 108100 kB' 'AnonPages: 160448 kB' 'Shmem: 6676616 kB' 'KernelStack: 7656 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104008 kB' 'Slab: 332820 kB' 'SReclaimable: 104008 kB' 'SUnreclaim: 228812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.765 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.765 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.766 
17:19:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # continue
[... xtrace condensed: the node0 meminfo fields Mlocked through HugePages_Total are compared against HugePages_Surp and skipped via continue ...]
00:02:36.766 17:19:03 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # continue 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:36.766 17:19:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:36.766 17:19:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.766 17:19:03 -- setup/common.sh@33 -- # echo 0 00:02:36.766 17:19:03 -- setup/common.sh@33 -- # return 0 00:02:36.766 17:19:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:36.766 17:19:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:36.766 17:19:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:36.766 17:19:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:36.766 17:19:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:36.766 node0=1024 expecting 1024 00:02:36.766 17:19:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:36.766 00:02:36.766 real 0m2.435s 00:02:36.766 user 0m0.592s 00:02:36.766 sys 0m0.810s 00:02:36.766 17:19:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:36.766 17:19:03 -- common/autotest_common.sh@10 -- # set +x 00:02:36.766 ************************************ 00:02:36.766 END TEST default_setup 00:02:36.766 ************************************ 00:02:36.766 17:19:03 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:36.766 17:19:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:36.766 17:19:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:36.766 17:19:03 -- common/autotest_common.sh@10 -- # set +x 00:02:36.766 ************************************ 00:02:36.766 START TEST per_node_1G_alloc 00:02:36.766 ************************************ 00:02:36.766 17:19:03 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:02:36.766 17:19:03 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:36.766 17:19:03 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:36.766 17:19:03 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:36.766 17:19:03 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:36.766 17:19:03 -- setup/hugepages.sh@51 -- # shift 00:02:36.766 17:19:03 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:36.766 17:19:03 -- setup/hugepages.sh@52 -- # local node_ids 00:02:36.766 17:19:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:36.767 17:19:03 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:36.767 17:19:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:36.767 17:19:03 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:36.767 17:19:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:36.767 17:19:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:36.767 17:19:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:36.767 17:19:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:36.767 17:19:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:36.767 17:19:03 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:36.767 17:19:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:36.767 17:19:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:36.767 17:19:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:36.767 17:19:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:36.767 17:19:03 -- setup/hugepages.sh@73 -- # return 0 00:02:36.767 17:19:03 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:36.767 
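The trace just above reduced the requested 1048576 kB to nr_hugepages=512 (the meminfo dumps later in this log show 'Hugepagesize: 2048 kB', and 1048576 / 2048 = 512) and assigned 512 pages to each of nodes 0 and 1; the NRHUGE=512 / HUGENODE=0,1 assignments around this point are then handed to scripts/setup.sh, which reserves the pages. A hedged sketch of that arithmetic and of the standard per-node sysfs interface such a reservation ends up using (an illustration, not a quote of setup.sh):

# Sketch only (assumptions: 2048 kB default hugepage size, two NUMA nodes, root via sudo).
size_kb=1048576                                                    # 1 GiB worth of hugepages, as in this test
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this system
nr_hugepages=$((size_kb / hugepagesize_kb))                        # 1048576 / 2048 = 512 pages per node

for node in 0 1; do
    echo "$nr_hugepages" | sudo tee \
        "/sys/devices/system/node/node${node}/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages" >/dev/null
done

# verify_nr_hugepages later re-reads /proc/meminfo and repeats the accounting check visible
# earlier in this trace at setup/hugepages.sh@110:
#   (( HugePages_Total == nr_hugepages + HugePages_Surp + HugePages_Rsvd ))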
17:19:03 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:36.767 17:19:03 -- setup/hugepages.sh@146 -- # setup output 00:02:36.767 17:19:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:36.767 17:19:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:02:38.149 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.149 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:38.149 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:38.149 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:38.149 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:38.149 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.149 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:38.149 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.149 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.149 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:38.150 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:38.150 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:38.150 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:38.150 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:38.150 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:38.150 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:38.150 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:38.150 17:19:04 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:38.150 17:19:04 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:38.150 17:19:04 -- setup/hugepages.sh@89 -- # local node 00:02:38.150 17:19:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:38.150 17:19:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:38.150 17:19:04 -- setup/hugepages.sh@92 -- # local surp 00:02:38.150 17:19:04 -- setup/hugepages.sh@93 -- # local resv 00:02:38.150 17:19:04 -- setup/hugepages.sh@94 -- # local anon 00:02:38.150 17:19:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:38.150 17:19:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:38.150 17:19:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:38.150 17:19:04 -- setup/common.sh@18 -- # local node= 00:02:38.150 17:19:04 -- setup/common.sh@19 -- # local var val 00:02:38.150 17:19:04 -- setup/common.sh@20 -- # local mem_f mem 00:02:38.150 17:19:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.150 17:19:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.150 17:19:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.150 17:19:04 -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.150 17:19:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.150 17:19:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40126204 kB' 'MemAvailable: 43802664 kB' 'Buffers: 2696 kB' 'Cached: 15802984 kB' 'SwapCached: 0 kB' 'Active: 12764760 kB' 'Inactive: 3488624 kB' 'Active(anon): 12180044 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451092 kB' 'Mapped: 174180 kB' 
'Shmem: 11732340 kB' 'KReclaimable: 193920 kB' 'Slab: 561604 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367684 kB' 'KernelStack: 13008 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13335828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196852 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.150 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.150 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.150 17:19:04 -- setup/common.sh@31 -- 
# read -r var val _
[... xtrace condensed: the /proc/meminfo fields Active(file) through VmallocTotal are compared against AnonHugePages and skipped via continue ...]
00:02:38.151 17:19:04 -- 
setup/common.sh@32 -- # continue 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:38.151 17:19:04 -- setup/common.sh@33 -- # echo 0 00:02:38.151 17:19:04 -- setup/common.sh@33 -- # return 0 00:02:38.151 17:19:04 -- setup/hugepages.sh@97 -- # anon=0 00:02:38.151 17:19:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:38.151 17:19:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.151 17:19:04 -- setup/common.sh@18 -- # local node= 00:02:38.151 17:19:04 -- setup/common.sh@19 -- # local var val 00:02:38.151 17:19:04 -- setup/common.sh@20 -- # local mem_f mem 00:02:38.151 17:19:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.151 17:19:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.151 17:19:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.151 17:19:04 -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.151 17:19:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.151 17:19:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40128936 kB' 'MemAvailable: 43805396 kB' 'Buffers: 2696 kB' 'Cached: 15802988 kB' 'SwapCached: 0 kB' 'Active: 12764832 kB' 'Inactive: 3488624 kB' 'Active(anon): 12180116 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 451120 kB' 'Mapped: 174180 kB' 'Shmem: 11732344 kB' 'KReclaimable: 193920 kB' 'Slab: 561604 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367684 kB' 'KernelStack: 13008 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13335840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196820 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.151 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.151 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.152 17:19:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.152 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.152 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.152 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.152 17:19:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.152 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.152 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.152 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.152 17:19:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.152 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.152 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.152 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.152 17:19:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.152 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.152 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.152 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.152 17:19:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.152 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.152 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.152 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.152 17:19:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.152 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.152 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.152 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.152 
17:19:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.152 17:19:04 -- setup/common.sh@32 -- # continue
[... xtrace condensed: the /proc/meminfo fields Mlocked through HugePages_Free are compared against HugePages_Surp and skipped via continue ...]
00:02:38.153 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.153 17:19:04 -- setup/common.sh@31 -- # read 
-r var val _ 00:02:38.153 17:19:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.153 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.153 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.153 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.153 17:19:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.153 17:19:04 -- setup/common.sh@33 -- # echo 0 00:02:38.153 17:19:04 -- setup/common.sh@33 -- # return 0 00:02:38.153 17:19:04 -- setup/hugepages.sh@99 -- # surp=0 00:02:38.153 17:19:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:38.153 17:19:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:38.153 17:19:04 -- setup/common.sh@18 -- # local node= 00:02:38.153 17:19:04 -- setup/common.sh@19 -- # local var val 00:02:38.153 17:19:04 -- setup/common.sh@20 -- # local mem_f mem 00:02:38.153 17:19:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.153 17:19:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.153 17:19:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.153 17:19:04 -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.153 17:19:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.153 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.153 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.153 17:19:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40129188 kB' 'MemAvailable: 43805648 kB' 'Buffers: 2696 kB' 'Cached: 15802996 kB' 'SwapCached: 0 kB' 'Active: 12764476 kB' 'Inactive: 3488624 kB' 'Active(anon): 12179760 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450696 kB' 'Mapped: 174160 kB' 'Shmem: 11732352 kB' 'KReclaimable: 193920 kB' 'Slab: 561596 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367676 kB' 'KernelStack: 12976 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13335856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196820 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:38.153 17:19:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.153 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.153 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.153 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.153 17:19:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.153 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.153 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.153 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.153 17:19:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.153 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.153 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.153 17:19:04 -- 
setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: the /proc/meminfo fields Buffers through AnonHugePages are compared against HugePages_Rsvd and skipped via continue; the scan continues on the following lines ...]
00:02:38.154 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 
17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.154 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.154 17:19:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:38.154 17:19:04 -- setup/common.sh@33 -- # echo 0 00:02:38.154 17:19:04 -- setup/common.sh@33 -- # return 0 00:02:38.154 17:19:04 -- setup/hugepages.sh@100 -- # resv=0 00:02:38.154 17:19:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:38.154 nr_hugepages=1024 00:02:38.154 17:19:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:38.154 resv_hugepages=0 00:02:38.154 17:19:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:38.154 surplus_hugepages=0 00:02:38.154 17:19:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:38.154 anon_hugepages=0 00:02:38.155 17:19:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:38.155 17:19:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:38.155 17:19:04 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:02:38.155 17:19:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:38.155 17:19:04 -- setup/common.sh@18 -- # local node= 00:02:38.155 17:19:04 -- setup/common.sh@19 -- # local var val 00:02:38.155 17:19:04 -- setup/common.sh@20 -- # local mem_f mem 00:02:38.155 17:19:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.155 17:19:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:38.155 17:19:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:38.155 17:19:04 -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.155 17:19:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.155 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 17:19:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40129520 kB' 'MemAvailable: 43805980 kB' 'Buffers: 2696 kB' 'Cached: 15803012 kB' 'SwapCached: 0 kB' 'Active: 12764204 kB' 'Inactive: 3488624 kB' 'Active(anon): 12179488 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450440 kB' 'Mapped: 174160 kB' 'Shmem: 11732368 kB' 'KReclaimable: 193920 kB' 'Slab: 561656 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367736 kB' 'KernelStack: 12992 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13335868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196820 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:38.155 17:19:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.155 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.155 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 17:19:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.155 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.155 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 17:19:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.155 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.155 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 17:19:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.155 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.155 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 17:19:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.155 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.155 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.155 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.155 17:19:04 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.156 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.156 17:19:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:38.156 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- 
# IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.157 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.157 17:19:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:38.157 17:19:04 -- setup/common.sh@33 -- # echo 1024 00:02:38.157 17:19:04 -- setup/common.sh@33 -- # return 0 00:02:38.157 17:19:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:38.157 17:19:04 -- setup/hugepages.sh@112 -- # get_nodes 00:02:38.157 17:19:04 -- setup/hugepages.sh@27 -- # local node 00:02:38.157 17:19:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.157 17:19:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:38.157 17:19:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:38.157 17:19:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:38.157 17:19:04 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:38.157 17:19:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:38.157 17:19:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.157 17:19:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.157 17:19:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:38.157 17:19:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.158 17:19:04 -- setup/common.sh@18 -- # local node=0 00:02:38.158 17:19:04 -- setup/common.sh@19 -- # local var val 00:02:38.158 17:19:04 -- setup/common.sh@20 -- # local mem_f mem 00:02:38.158 17:19:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.158 17:19:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:38.158 17:19:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:38.158 17:19:04 -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.158 17:19:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32876940 kB' 'MemFree: 20185140 kB' 'MemUsed: 12691800 kB' 'SwapCached: 0 kB' 'Active: 7066576 kB' 'Inactive: 3327648 kB' 'Active(anon): 6834100 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10236820 kB' 'Mapped: 107972 kB' 'AnonPages: 160688 kB' 'Shmem: 6676696 kB' 'KernelStack: 7736 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104008 kB' 'Slab: 332764 kB' 'SReclaimable: 104008 kB' 'SUnreclaim: 228756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 
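Note: the trace above is setup/common.sh's get_meminfo walking /sys/devices/system/node/node0/meminfo key by key until it reaches the requested HugePages_Surp field. A minimal standalone sketch of the same lookup, assuming a hypothetical helper name and using sed/awk instead of the script's read loop:

  # get_node_meminfo KEY [NODE] - print one meminfo value, per node when a node is given.
  get_node_meminfo() {
      local key=$1 node=${2:-}
      local f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]] \
          && f=/sys/devices/system/node/node${node}/meminfo
      # Per-node meminfo lines are prefixed with "Node <N>", so strip that first.
      sed 's/^Node [0-9]* *//' "$f" | awk -v k="$key:" '$1 == k { print $2 }'
  }
  # e.g. get_node_meminfo HugePages_Surp 0   -> prints 0 on this box, matching the trace.
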
00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- 
setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.158 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.158 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 
00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@33 -- # echo 0 00:02:38.159 17:19:04 -- setup/common.sh@33 -- # return 0 00:02:38.159 17:19:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.159 17:19:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:38.159 17:19:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:38.159 17:19:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:38.159 17:19:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:38.159 17:19:04 -- setup/common.sh@18 -- # local node=1 00:02:38.159 17:19:04 -- setup/common.sh@19 -- # local var val 00:02:38.159 17:19:04 -- setup/common.sh@20 -- # local mem_f mem 00:02:38.159 17:19:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:38.159 17:19:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:38.159 17:19:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:38.159 17:19:04 -- setup/common.sh@28 -- # mapfile -t mem 00:02:38.159 17:19:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664776 kB' 'MemFree: 19944664 kB' 'MemUsed: 7720112 kB' 'SwapCached: 0 kB' 'Active: 5697644 kB' 'Inactive: 160976 kB' 'Active(anon): 5345404 kB' 'Inactive(anon): 0 kB' 'Active(file): 352240 kB' 'Inactive(file): 160976 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5568904 kB' 'Mapped: 66188 kB' 'AnonPages: 289760 kB' 'Shmem: 5055688 kB' 'KernelStack: 5256 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89912 kB' 'Slab: 228892 kB' 'SReclaimable: 89912 kB' 'SUnreclaim: 138980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 
17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- 
setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.159 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.159 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # continue 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # IFS=': ' 00:02:38.420 17:19:04 -- setup/common.sh@31 -- # read -r var val _ 00:02:38.420 17:19:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:38.420 17:19:04 -- setup/common.sh@33 -- # echo 0 00:02:38.420 17:19:04 -- setup/common.sh@33 -- # return 0 00:02:38.420 17:19:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:38.420 17:19:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.420 17:19:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.420 17:19:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.420 17:19:04 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:38.420 node0=512 expecting 512 00:02:38.420 17:19:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:38.420 17:19:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:38.420 17:19:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:38.420 17:19:04 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:38.420 node1=512 expecting 512 00:02:38.420 17:19:04 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:38.420 00:02:38.420 real 0m1.460s 00:02:38.420 user 0m0.623s 00:02:38.420 sys 0m0.797s 00:02:38.420 17:19:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:38.420 17:19:04 -- common/autotest_common.sh@10 -- # set +x 00:02:38.420 ************************************ 00:02:38.420 END TEST per_node_1G_alloc 00:02:38.420 ************************************ 00:02:38.420 17:19:04 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:38.420 
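Note: with node0=512 and node1=512 both matching, per_node_1G_alloc passes: the 1024 pages requested system-wide split evenly across the two NUMA nodes, and the reserved/surplus counters stayed at 0. The same even-split check can also be expressed directly against the per-node sysfs hugepage counters; a rough sketch (not the test's own code, which parses the meminfo files as traced above):

  # Expect 1024 x 2048kB pages split 512/512 across a 2-node system.
  expected_per_node=512
  total=0
  for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
      pages=$(cat "$f")
      total=$(( total + pages ))
      (( pages == expected_per_node )) || echo "uneven split in $f: $pages"
  done
  echo "total=$total (expecting 1024)"
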
17:19:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:38.420 17:19:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:38.420 17:19:04 -- common/autotest_common.sh@10 -- # set +x 00:02:38.420 ************************************ 00:02:38.420 START TEST even_2G_alloc 00:02:38.420 ************************************ 00:02:38.420 17:19:04 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:02:38.420 17:19:04 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:38.420 17:19:04 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:38.420 17:19:04 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:38.420 17:19:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:38.420 17:19:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:38.420 17:19:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:38.420 17:19:04 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:38.420 17:19:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:38.420 17:19:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:38.420 17:19:04 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:38.420 17:19:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:38.420 17:19:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:38.420 17:19:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:38.420 17:19:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:38.420 17:19:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:38.420 17:19:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:38.421 17:19:04 -- setup/hugepages.sh@83 -- # : 512 00:02:38.421 17:19:04 -- setup/hugepages.sh@84 -- # : 1 00:02:38.421 17:19:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:38.421 17:19:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:38.421 17:19:04 -- setup/hugepages.sh@83 -- # : 0 00:02:38.421 17:19:04 -- setup/hugepages.sh@84 -- # : 0 00:02:38.421 17:19:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:38.421 17:19:04 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:38.421 17:19:04 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:38.421 17:19:04 -- setup/hugepages.sh@153 -- # setup output 00:02:38.421 17:19:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:38.421 17:19:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:02:39.801 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:39.801 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:39.801 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:39.801 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:39.801 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:39.801 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:39.801 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:39.801 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:39.801 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:39.801 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:39.801 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:39.801 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:39.801 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:39.801 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:39.801 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:39.801 0000:80:04.1 (8086 0e21): 
Already using the vfio-pci driver 00:02:39.801 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:39.801 17:19:06 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:39.801 17:19:06 -- setup/hugepages.sh@89 -- # local node 00:02:39.801 17:19:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:39.801 17:19:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:39.801 17:19:06 -- setup/hugepages.sh@92 -- # local surp 00:02:39.801 17:19:06 -- setup/hugepages.sh@93 -- # local resv 00:02:39.801 17:19:06 -- setup/hugepages.sh@94 -- # local anon 00:02:39.801 17:19:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:39.801 17:19:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:39.801 17:19:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:39.801 17:19:06 -- setup/common.sh@18 -- # local node= 00:02:39.801 17:19:06 -- setup/common.sh@19 -- # local var val 00:02:39.801 17:19:06 -- setup/common.sh@20 -- # local mem_f mem 00:02:39.801 17:19:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.801 17:19:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.801 17:19:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.801 17:19:06 -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.801 17:19:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40120116 kB' 'MemAvailable: 43796576 kB' 'Buffers: 2696 kB' 'Cached: 15803084 kB' 'SwapCached: 0 kB' 'Active: 12764016 kB' 'Inactive: 3488624 kB' 'Active(anon): 12179300 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450084 kB' 'Mapped: 174352 kB' 'Shmem: 11732440 kB' 'KReclaimable: 193920 kB' 'Slab: 561572 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367652 kB' 'KernelStack: 12992 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13336096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196852 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 
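Note: the verify_nr_hugepages pass starting above first checks /sys/kernel/mm/transparent_hugepage/enabled (the "always [madvise] never" string in the trace) and only counts AnonHugePages when THP is not pinned to [never]; with AnonHugePages at 0 kB here, anon_hugepages stays 0. A hedged sketch of that gate, assuming the 2048 kB Hugepagesize reported in the meminfo dump above:

  thp=/sys/kernel/mm/transparent_hugepage/enabled
  anon=0
  if [[ -r $thp && $(cat "$thp") != *'[never]'* ]]; then
      # AnonHugePages is reported in kB; convert to 2MB-page units.
      anon_kb=$(awk '/^AnonHugePages:/ { print $2 }' /proc/meminfo)
      anon=$(( ${anon_kb:-0} / 2048 ))
  fi
  echo "anon_hugepages=${anon}"
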
00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 
17:19:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.801 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.801 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:39.802 17:19:06 -- setup/common.sh@33 -- # echo 0 00:02:39.802 17:19:06 -- setup/common.sh@33 -- # 
return 0 00:02:39.802 17:19:06 -- setup/hugepages.sh@97 -- # anon=0 00:02:39.802 17:19:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:39.802 17:19:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.802 17:19:06 -- setup/common.sh@18 -- # local node= 00:02:39.802 17:19:06 -- setup/common.sh@19 -- # local var val 00:02:39.802 17:19:06 -- setup/common.sh@20 -- # local mem_f mem 00:02:39.802 17:19:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.802 17:19:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.802 17:19:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.802 17:19:06 -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.802 17:19:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40120188 kB' 'MemAvailable: 43796648 kB' 'Buffers: 2696 kB' 'Cached: 15803088 kB' 'SwapCached: 0 kB' 'Active: 12764700 kB' 'Inactive: 3488624 kB' 'Active(anon): 12179984 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450888 kB' 'Mapped: 174288 kB' 'Shmem: 11732444 kB' 'KReclaimable: 193920 kB' 'Slab: 561588 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367668 kB' 'KernelStack: 13040 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13336108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196836 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 
-- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.802 17:19:06 -- setup/common.sh@33 -- # echo 0 00:02:39.802 17:19:06 -- setup/common.sh@33 -- # return 0 00:02:39.802 17:19:06 -- setup/hugepages.sh@99 -- # surp=0 00:02:39.802 17:19:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:39.802 17:19:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:39.802 17:19:06 -- setup/common.sh@18 -- # local node= 00:02:39.802 17:19:06 -- setup/common.sh@19 -- # local var val 00:02:39.802 17:19:06 -- setup/common.sh@20 -- # local mem_f mem 00:02:39.802 17:19:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.802 17:19:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.802 17:19:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.802 17:19:06 -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.802 17:19:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40119940 kB' 'MemAvailable: 43796400 kB' 'Buffers: 2696 kB' 'Cached: 15803100 kB' 'SwapCached: 0 kB' 'Active: 12764536 kB' 'Inactive: 3488624 kB' 'Active(anon): 12179820 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450612 kB' 'Mapped: 174200 kB' 'Shmem: 11732456 kB' 'KReclaimable: 193920 kB' 'Slab: 561588 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367668 kB' 'KernelStack: 13040 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13336124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196836 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.802 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.802 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 
17:19:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- 
# IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 
17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read 
-r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:39.803 17:19:06 -- setup/common.sh@33 -- # echo 0 00:02:39.803 17:19:06 -- setup/common.sh@33 -- # return 0 00:02:39.803 17:19:06 -- setup/hugepages.sh@100 -- # resv=0 00:02:39.803 17:19:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:39.803 nr_hugepages=1024 00:02:39.803 17:19:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:39.803 resv_hugepages=0 00:02:39.803 17:19:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:39.803 surplus_hugepages=0 00:02:39.803 17:19:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:39.803 anon_hugepages=0 00:02:39.803 17:19:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:39.803 17:19:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:39.803 17:19:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:39.803 17:19:06 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:39.803 17:19:06 -- setup/common.sh@18 -- # local node= 00:02:39.803 17:19:06 -- setup/common.sh@19 -- # local var val 00:02:39.803 17:19:06 -- setup/common.sh@20 -- # local mem_f mem 00:02:39.803 17:19:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.803 17:19:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:39.803 17:19:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:39.803 17:19:06 -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.803 17:19:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40119940 kB' 'MemAvailable: 43796400 kB' 'Buffers: 2696 kB' 'Cached: 15803112 kB' 'SwapCached: 0 kB' 'Active: 12764508 kB' 'Inactive: 3488624 kB' 'Active(anon): 12179792 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 450576 kB' 'Mapped: 174200 kB' 'Shmem: 11732468 kB' 'KReclaimable: 193920 kB' 'Slab: 561588 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367668 kB' 'KernelStack: 13024 kB' 
'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13336136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196836 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.803 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.803 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:39.804 17:19:06 -- setup/common.sh@33 -- # echo 1024 00:02:39.804 17:19:06 -- setup/common.sh@33 -- # return 0 00:02:39.804 17:19:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:39.804 17:19:06 -- setup/hugepages.sh@112 -- # get_nodes 00:02:39.804 17:19:06 -- setup/hugepages.sh@27 -- # local node 00:02:39.804 17:19:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.804 17:19:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:39.804 17:19:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:39.804 17:19:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:39.804 17:19:06 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:39.804 17:19:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:39.804 17:19:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:39.804 17:19:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:39.804 17:19:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:39.804 17:19:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.804 17:19:06 -- setup/common.sh@18 -- # local node=0 00:02:39.804 17:19:06 -- setup/common.sh@19 -- # local var val 00:02:39.804 17:19:06 -- setup/common.sh@20 -- # local mem_f mem 00:02:39.804 17:19:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.804 17:19:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:39.804 17:19:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:39.804 17:19:06 -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.804 17:19:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20186608 kB' 'MemUsed: 12690332 kB' 'SwapCached: 0 kB' 'Active: 7066088 kB' 'Inactive: 3327648 kB' 'Active(anon): 6833612 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10236896 kB' 'Mapped: 108012 kB' 'AnonPages: 160036 kB' 'Shmem: 6676772 kB' 'KernelStack: 7752 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104008 kB' 'Slab: 332680 kB' 'SReclaimable: 104008 kB' 'SUnreclaim: 228672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 
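The xtrace above is setup/common.sh's get_meminfo walking the node-0 meminfo snapshot key by key (IFS=': ' read -r var val _, then continue until the requested HugePages_Surp key matches and its value is echoed). Below is a minimal, self-contained bash sketch of that lookup pattern; meminfo_value is an illustrative name, not the real helper, and the sed-based prefix strip stands in for the extglob expansion the traced script uses.

# Hedged sketch of the lookup pattern traced above (illustrative name,
# not the real setup/common.sh helper): scan a meminfo file with
# IFS=': ' read -r var val _ and echo the value of the requested key.
meminfo_value() {
  local get=$1 node=${2-} mem_f=/proc/meminfo var val
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  # Per-node files prefix every line with "Node <N> "; strip that first.
  # (The real script does this with an extglob parameter expansion on a
  # mapfile'd array; sed keeps the sketch short.)
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done < <(sed 's/^Node [0-9]* //' "$mem_f")
  return 1
}

# Usage matching the call traced above: surplus hugepages on NUMA node 0.
meminfo_value HugePages_Surp 0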
00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.804 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.804 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@33 -- # echo 0 00:02:39.805 17:19:06 -- setup/common.sh@33 -- # return 0 00:02:39.805 17:19:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:39.805 17:19:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:39.805 17:19:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:39.805 17:19:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:39.805 17:19:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:39.805 17:19:06 -- setup/common.sh@18 -- # local node=1 00:02:39.805 17:19:06 -- setup/common.sh@19 -- # local var val 00:02:39.805 17:19:06 -- setup/common.sh@20 -- # local mem_f mem 00:02:39.805 17:19:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:39.805 17:19:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:39.805 17:19:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:39.805 17:19:06 -- setup/common.sh@28 -- # mapfile -t mem 00:02:39.805 17:19:06 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664776 kB' 'MemFree: 19933332 kB' 'MemUsed: 7731444 kB' 'SwapCached: 0 kB' 'Active: 5698160 kB' 'Inactive: 160976 kB' 'Active(anon): 5345920 kB' 'Inactive(anon): 0 kB' 'Active(file): 352240 kB' 'Inactive(file): 160976 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5568912 kB' 'Mapped: 66188 kB' 'AnonPages: 290280 kB' 'Shmem: 5055696 kB' 'KernelStack: 5288 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89912 kB' 'Slab: 228908 kB' 'SReclaimable: 89912 kB' 'SUnreclaim: 138996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 
00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # continue 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # IFS=': ' 00:02:39.805 17:19:06 -- setup/common.sh@31 -- # read -r var val _ 00:02:39.805 17:19:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:39.805 17:19:06 -- setup/common.sh@33 -- # echo 0 00:02:39.805 17:19:06 -- setup/common.sh@33 -- # return 0 00:02:39.805 17:19:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:39.805 17:19:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:39.805 17:19:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:39.805 17:19:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:39.805 17:19:06 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:39.805 node0=512 expecting 512 00:02:39.805 17:19:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:39.805 17:19:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:39.805 17:19:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:39.805 17:19:06 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:39.805 node1=512 expecting 512 00:02:39.805 17:19:06 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:39.805 00:02:39.805 real 0m1.402s 00:02:39.805 user 0m0.569s 00:02:39.805 sys 0m0.796s 00:02:39.805 17:19:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:39.805 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:02:39.805 ************************************ 00:02:39.805 END TEST even_2G_alloc 00:02:39.805 ************************************ 00:02:39.805 17:19:06 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:39.805 17:19:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:39.805 17:19:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:39.805 17:19:06 -- common/autotest_common.sh@10 -- # set +x 00:02:39.805 ************************************ 00:02:39.805 START TEST odd_alloc 00:02:39.805 ************************************ 00:02:39.805 17:19:06 -- common/autotest_common.sh@1111 -- # odd_alloc 00:02:39.805 17:19:06 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:39.805 17:19:06 -- setup/hugepages.sh@49 -- # local size=2098176 00:02:39.805 17:19:06 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:39.805 17:19:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:39.805 17:19:06 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:39.805 17:19:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:39.805 17:19:06 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:39.805 17:19:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:39.805 17:19:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:39.805 17:19:06 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:39.805 17:19:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:39.805 17:19:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:39.805 17:19:06 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:39.805 17:19:06 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:39.805 17:19:06 -- setup/hugepages.sh@81 -- # (( 
_no_nodes > 0 )) 00:02:39.805 17:19:06 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:39.805 17:19:06 -- setup/hugepages.sh@83 -- # : 513 00:02:39.805 17:19:06 -- setup/hugepages.sh@84 -- # : 1 00:02:39.805 17:19:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:39.805 17:19:06 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:39.805 17:19:06 -- setup/hugepages.sh@83 -- # : 0 00:02:39.805 17:19:06 -- setup/hugepages.sh@84 -- # : 0 00:02:39.805 17:19:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:39.805 17:19:06 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:39.805 17:19:06 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:39.805 17:19:06 -- setup/hugepages.sh@160 -- # setup output 00:02:39.805 17:19:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:39.805 17:19:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:02:41.191 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.191 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:41.191 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.191 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.191 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.191 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.191 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.191 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.191 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:41.191 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:41.191 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:41.191 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:41.191 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:41.191 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:41.191 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:41.191 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:41.191 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:41.191 17:19:07 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:41.191 17:19:07 -- setup/hugepages.sh@89 -- # local node 00:02:41.191 17:19:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:41.191 17:19:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:41.191 17:19:07 -- setup/hugepages.sh@92 -- # local surp 00:02:41.191 17:19:07 -- setup/hugepages.sh@93 -- # local resv 00:02:41.191 17:19:07 -- setup/hugepages.sh@94 -- # local anon 00:02:41.191 17:19:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:41.191 17:19:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:41.191 17:19:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:41.191 17:19:07 -- setup/common.sh@18 -- # local node= 00:02:41.191 17:19:07 -- setup/common.sh@19 -- # local var val 00:02:41.191 17:19:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.191 17:19:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.191 17:19:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.191 17:19:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.191 17:19:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.191 17:19:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.191 17:19:07 -- setup/common.sh@31 
-- # read -r var val _ 00:02:41.191 17:19:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40108152 kB' 'MemAvailable: 43784612 kB' 'Buffers: 2696 kB' 'Cached: 15803184 kB' 'SwapCached: 0 kB' 'Active: 12761896 kB' 'Inactive: 3488624 kB' 'Active(anon): 12177180 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447888 kB' 'Mapped: 173484 kB' 'Shmem: 11732540 kB' 'KReclaimable: 193920 kB' 'Slab: 561208 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367288 kB' 'KernelStack: 12864 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 13305848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.191 17:19:07 -- setup/common.sh@31 -- 
# read -r var val _ 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.191 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.191 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 
17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.192 17:19:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.192 17:19:07 -- setup/common.sh@33 -- # echo 0 00:02:41.192 17:19:07 -- setup/common.sh@33 -- # return 0 00:02:41.192 17:19:07 -- setup/hugepages.sh@97 -- # anon=0 00:02:41.192 17:19:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:41.192 17:19:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.192 17:19:07 -- setup/common.sh@18 -- # local node= 00:02:41.192 17:19:07 -- setup/common.sh@19 -- # local var val 00:02:41.192 17:19:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.192 17:19:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.192 17:19:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.192 17:19:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.192 17:19:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.192 17:19:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.192 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40108780 kB' 'MemAvailable: 43785240 kB' 'Buffers: 2696 kB' 'Cached: 15803188 kB' 'SwapCached: 0 kB' 'Active: 12763776 kB' 'Inactive: 3488624 kB' 'Active(anon): 12179060 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 449824 kB' 'Mapped: 174180 kB' 'Shmem: 11732544 kB' 'KReclaimable: 193920 kB' 
'Slab: 561184 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367264 kB' 'KernelStack: 12880 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 13306928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196632 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 
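For odd_alloc the snapshots above report 'HugePages_Total: 1025' and 'Hugetlb: 2099200 kB', which is consistent: 1025 pages x 2048 kB = 2,099,200 kB. The earlier get_test_nr_hugepages_per_node trace assigned nodes_test[0]=513 and nodes_test[1]=512, i.e. the odd total is spread as evenly as possible over the two NUMA nodes. A hedged sketch of that distribution (illustrative helper, not the script's implementation):

# Hedged sketch: spread an odd hugepage count across NUMA nodes as evenly
# as possible, the first nodes absorbing one extra page each.
split_hugepages() {
  local total=$1 nodes=$2 node per rem
  per=$((total / nodes))
  rem=$((total % nodes))
  for ((node = 0; node < nodes; node++)); do
    echo "node$node=$((per + (node < rem ? 1 : 0)))"
  done
}

split_hugepages 1025 2   # prints node0=513 and node1=512, as traced earlier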
00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
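The HUGEMEM=2049 / HUGE_EVEN_ALLOC=yes setup step that precedes this verification ultimately asks the kernel for those per-node page counts; the standard interface for that is each node's nr_hugepages file under sysfs. A hedged sketch of such a request, using the 513/512 split from this run (an illustration of the kernel interface only, not what scripts/setup.sh literally does):

# Hedged sketch, assuming 2 MB hugepages and the node split from this run:
# ask each node for its share, then show the global counters that
# verify_nr_hugepages (traced above) checks afterwards.
declare -A want=( [0]=513 [1]=512 )
for node in "${!want[@]}"; do
  echo "${want[$node]}" | sudo tee \
    "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done
grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo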
00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.193 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.193 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.194 17:19:07 -- setup/common.sh@33 -- # echo 0 00:02:41.194 17:19:07 -- setup/common.sh@33 -- # return 0 00:02:41.194 17:19:07 -- setup/hugepages.sh@99 -- # surp=0 00:02:41.194 17:19:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:41.194 17:19:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:41.194 17:19:07 -- setup/common.sh@18 -- # local node= 00:02:41.194 17:19:07 -- setup/common.sh@19 -- # local var val 00:02:41.194 17:19:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.194 17:19:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.194 17:19:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.194 17:19:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.194 17:19:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.194 17:19:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40109012 kB' 'MemAvailable: 43785472 kB' 'Buffers: 2696 kB' 'Cached: 15803200 kB' 'SwapCached: 0 kB' 'Active: 12759624 kB' 'Inactive: 3488624 kB' 'Active(anon): 12174908 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 445652 kB' 'Mapped: 173724 kB' 'Shmem: 11732556 kB' 'KReclaimable: 193920 kB' 'Slab: 561184 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367264 kB' 'KernelStack: 12896 kB' 'PageTables: 7688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 13303896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 
17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- 
setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.194 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.194 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 
17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:41.195 17:19:07 -- setup/common.sh@33 -- # echo 0 00:02:41.195 17:19:07 -- setup/common.sh@33 -- # return 0 00:02:41.195 17:19:07 -- setup/hugepages.sh@100 -- # resv=0 00:02:41.195 17:19:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:41.195 
nr_hugepages=1025 00:02:41.195 17:19:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:41.195 resv_hugepages=0 00:02:41.195 17:19:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:41.195 surplus_hugepages=0 00:02:41.195 17:19:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:41.195 anon_hugepages=0 00:02:41.195 17:19:07 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:41.195 17:19:07 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:41.195 17:19:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:41.195 17:19:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:41.195 17:19:07 -- setup/common.sh@18 -- # local node= 00:02:41.195 17:19:07 -- setup/common.sh@19 -- # local var val 00:02:41.195 17:19:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.195 17:19:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.195 17:19:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.195 17:19:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.195 17:19:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.195 17:19:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40109592 kB' 'MemAvailable: 43786052 kB' 'Buffers: 2696 kB' 'Cached: 15803204 kB' 'SwapCached: 0 kB' 'Active: 12762236 kB' 'Inactive: 3488624 kB' 'Active(anon): 12177520 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 448196 kB' 'Mapped: 173648 kB' 'Shmem: 11732560 kB' 'KReclaimable: 193920 kB' 'Slab: 561160 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367240 kB' 'KernelStack: 12896 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 13306960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.195 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.195 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 
17:19:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 
17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.196 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.196 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.197 17:19:07 -- setup/common.sh@33 -- # echo 1025 00:02:41.197 17:19:07 -- setup/common.sh@33 -- # return 0 00:02:41.197 17:19:07 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:41.197 17:19:07 -- setup/hugepages.sh@112 -- # get_nodes 00:02:41.197 17:19:07 -- setup/hugepages.sh@27 -- # local node 00:02:41.197 17:19:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.197 17:19:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:41.197 17:19:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:41.197 17:19:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:41.197 17:19:07 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:41.197 17:19:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:41.197 17:19:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:41.197 17:19:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:41.197 17:19:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:41.197 17:19:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.197 17:19:07 -- setup/common.sh@18 -- # local node=0 00:02:41.197 17:19:07 -- setup/common.sh@19 -- # local var val 00:02:41.197 17:19:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.197 17:19:07 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.197 17:19:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:41.197 17:19:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:41.197 17:19:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.197 17:19:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20189524 kB' 'MemUsed: 12687416 kB' 'SwapCached: 0 kB' 'Active: 7063816 kB' 'Inactive: 3327648 kB' 'Active(anon): 6831340 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10236976 kB' 'Mapped: 107148 kB' 'AnonPages: 157608 kB' 'Shmem: 6676852 kB' 'KernelStack: 7768 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104008 kB' 'Slab: 332496 kB' 'SReclaimable: 104008 kB' 'SUnreclaim: 228488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 
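The loop being traced here is the get_meminfo helper scanning every key of the node-0 meminfo dump for HugePages_Surp; the earlier passes in this section did the same against /proc/meminfo for HugePages_Surp, HugePages_Rsvd and HugePages_Total. The shape of the lookup is visible in the xtrace: use /proc/meminfo unless a node argument points at /sys/devices/system/node/nodeN/meminfo, strip the leading "Node N " prefix that the per-node files carry, then read each "Key: value" pair until the requested key matches and echo its value. A minimal bash sketch of that lookup, following the trace rather than the exact SPDK source (names and structure are illustrative):

    # Sketch only: mirrors the get_meminfo behaviour seen in the xtrace above.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # A per-node query switches to that node's own meminfo file.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "; drop that prefix
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the per-key checks filling this part of the log
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp it returns the system-wide surplus count (0 in this run); get_meminfo HugePages_Surp 0 returns node 0's value, which is what the surrounding trace is in the middle of resolving.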
00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.197 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.197 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- 
setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 
00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@33 -- # echo 0 00:02:41.198 17:19:07 -- setup/common.sh@33 -- # return 0 00:02:41.198 17:19:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:41.198 17:19:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:41.198 17:19:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:41.198 17:19:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:41.198 17:19:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.198 17:19:07 -- setup/common.sh@18 -- # local node=1 00:02:41.198 17:19:07 -- setup/common.sh@19 -- # local var val 00:02:41.198 17:19:07 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.198 17:19:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.198 17:19:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:41.198 17:19:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:41.198 17:19:07 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.198 17:19:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664776 kB' 'MemFree: 19920504 kB' 'MemUsed: 7744272 kB' 'SwapCached: 0 kB' 'Active: 5693508 kB' 'Inactive: 160976 kB' 'Active(anon): 5341268 kB' 'Inactive(anon): 0 kB' 'Active(file): 352240 kB' 'Inactive(file): 160976 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5568928 kB' 'Mapped: 66188 kB' 'AnonPages: 285720 kB' 'Shmem: 5055712 kB' 'KernelStack: 5160 kB' 'PageTables: 3276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89912 kB' 'Slab: 228664 kB' 'SReclaimable: 89912 kB' 'SUnreclaim: 138752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 
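For the per-node pass, get_nodes (traced a little earlier, around hugepages.sh@27-33) globbed /sys/devices/system/node/node+([0-9]), recorded an expected hugepage count per node (512 and 513 here) keyed by the numeric suffix, and set no_nodes=2; the loop this section sits inside then adds the reserved count plus each node's HugePages_Surp into nodes_test before the final comparison. A rough sketch of that bookkeeping, reusing the get_meminfo sketch above; the array names follow the xtrace, but the sysfs read below is only a stand-in because the log does not show where the expected per-node values come from:

    # Sketch only: per-node hugepage bookkeeping as the get_nodes / nodes_test trace suggests.
    shopt -s extglob
    declare -a nodes_sys nodes_test

    for node in /sys/devices/system/node/node+([0-9]); do
        # Stand-in source for the expected count (the trace only shows the values 512 and 513).
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    ((no_nodes > 0)) || exit 1

    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    for node in "${!nodes_sys[@]}"; do
        ((nodes_test[node] += resv))
        ((nodes_test[node] += $(get_meminfo HugePages_Surp "$node")))   # also 0 for both nodes here
    done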
00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # continue 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.198 17:19:07 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.198 17:19:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:41.198 17:19:07 
-- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue sequence repeats for Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free]
00:02:41.199 17:19:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.199 17:19:07 -- setup/common.sh@33 -- # echo 0
00:02:41.199 17:19:07 -- setup/common.sh@33 -- # return 0
00:02:41.199 17:19:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:41.199 17:19:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:41.199 17:19:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:41.199 17:19:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:41.199 17:19:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
node0=512 expecting 513
00:02:41.199 17:19:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:41.199 17:19:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:41.199 17:19:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:41.199 17:19:07 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513 expecting 512
00:02:41.199 17:19:07 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:02:41.199
00:02:41.199 real 0m1.388s
00:02:41.199 user 0m0.607s
00:02:41.199 sys 0m0.746s
00:02:41.199 17:19:07 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:41.199 17:19:07 -- common/autotest_common.sh@10 -- # set +x
00:02:41.199 ************************************
00:02:41.199 END TEST odd_alloc
00:02:41.199 ************************************
00:02:41.458 17:19:07 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:02:41.458 17:19:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:41.458 17:19:07 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:41.458 17:19:07 -- common/autotest_common.sh@10 -- # set +x
00:02:41.458 ************************************
00:02:41.458 START TEST custom_alloc
00:02:41.458 ************************************
00:02:41.458 17:19:07 -- common/autotest_common.sh@1111 -- # custom_alloc
00:02:41.458 17:19:07 -- setup/hugepages.sh@167 -- # local IFS=,
00:02:41.458 17:19:07 -- setup/hugepages.sh@169 -- # local node
00:02:41.458 17:19:07 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:02:41.458 17:19:07 -- setup/hugepages.sh@170 -- # local nodes_hp
00:02:41.458 17:19:07 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:02:41.458 17:19:07 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:02:41.458 17:19:07 -- setup/hugepages.sh@49 -- # local size=1048576
00:02:41.458 17:19:07 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:41.458 17:19:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:41.458 17:19:07 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:41.458 17:19:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[setup/hugepages.sh@62-84: no user node list is given, so the 512 pages are split evenly across the 2 nodes, nodes_test[1]=256 then nodes_test[0]=256]
00:02:41.458 17:19:07 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:02:41.458 17:19:07 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:02:41.458 17:19:07 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:02:41.458 17:19:07 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:41.458 17:19:07 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:41.458 17:19:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:41.458 17:19:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:41.458 17:19:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
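The sizing steps traced above follow one rule: get_test_nr_hugepages turns a requested size in kB into a count of 2048 kB hugepages (1048576 kB becomes 512 pages, 2097152 kB becomes 1024), and get_test_nr_hugepages_per_node spreads that count over the NUMA nodes unless explicit per-node values are supplied. The fragment below is a minimal stand-alone sketch of that arithmetic, not the code from setup/hugepages.sh; the names want_pages and split_across_nodes are illustrative only.

#!/usr/bin/env bash
# Sketch only: mirrors the size -> page count -> per-node split seen in the trace above.
default_hugepagesize_kb=2048            # Hugepagesize reported in /proc/meminfo

want_pages() {                          # want_pages 1048576  -> 512
    local size_kb=$1
    echo $(( size_kb / default_hugepagesize_kb ))
}

split_across_nodes() {                  # split_across_nodes 512 2  -> "256 256"
    local pages=$1 nodes=$2 out=() n
    for (( n = 0; n < nodes; n++ )); do
        out[n]=$(( pages / nodes ))
    done
    # Hand any remainder out one page at a time so the counts sum back to $pages
    # (the even split is all the trace above actually exercises).
    local rem=$(( pages % nodes ))
    for (( n = nodes - 1; rem > 0; n--, rem-- )); do
        out[n]=$(( out[n] + 1 ))
    done
    echo "${out[@]}"
}

want_pages 1048576          # 512, as in nr_hugepages=512 above
split_across_nodes 512 2    # 256 256, as in nodes_test[0]=nodes_test[1]=256 above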
00:02:41.459 17:19:07 -- setup/hugepages.sh@62 -- # user_nodes=()
[setup/hugepages.sh@62-78: _nr_hugepages=1024 this time; nodes_hp already has node 0 set, so nodes_test[0]=512 and the helper returns 0]
00:02:41.459 17:19:07 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:02:41.459 17:19:07 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:02:41.459 17:19:07 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:02:41.459 17:19:07 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
[the @181-183 loop body runs once more for node 1]
00:02:41.459 17:19:07 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[setup/hugepages.sh@62-78: nodes_test[0]=512, nodes_test[1]=1024, return 0]
00:02:41.459 17:19:07 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:02:41.459 17:19:07 -- setup/hugepages.sh@187 -- # setup output
00:02:41.459 17:19:07 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:41.459 17:19:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:02:42.840 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:42.840 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:42.840 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:42.840 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:42.840 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:42.840 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:42.840 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:42.840 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:42.840 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:42.840 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:42.840 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:42.840 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:42.840 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:42.840 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:42.840 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:42.840 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:42.840 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:42.840 17:19:09 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:42.840 17:19:09 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:42.840 17:19:09 -- setup/hugepages.sh@89 -- # local node 00:02:42.840 17:19:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:42.840 17:19:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:42.840 17:19:09 -- setup/hugepages.sh@92 -- # local surp 00:02:42.840 17:19:09 -- setup/hugepages.sh@93 -- # local resv 00:02:42.840 17:19:09 -- setup/hugepages.sh@94 -- # local anon 00:02:42.841 17:19:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:42.841 17:19:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:42.841 17:19:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:42.841 17:19:09 -- setup/common.sh@18 -- # local node= 00:02:42.841 17:19:09 -- setup/common.sh@19 -- # local var val 00:02:42.841 17:19:09 -- setup/common.sh@20 -- # local mem_f mem 00:02:42.841 17:19:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.841 17:19:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.841 17:19:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.841 17:19:09 -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.841 17:19:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.841 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.841 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.841 17:19:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 39050304 kB' 'MemAvailable: 42726764 kB' 'Buffers: 2696 kB' 'Cached: 15803280 kB' 'SwapCached: 0 kB' 'Active: 12756788 kB' 'Inactive: 3488624 kB' 'Active(anon): 12172072 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 442640 kB' 'Mapped: 173236 kB' 'Shmem: 11732636 kB' 'KReclaimable: 193920 kB' 'Slab: 560876 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 366956 kB' 'KernelStack: 12880 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 13300404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196788 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:42.841 17:19:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.841 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.841 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.841 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.841 17:19:09 -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:42.841 17:19:09 -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: the IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue sequence repeats for MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk and Percpu]
00:02:42.842 17:19:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.842 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.842 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.842 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.842 17:19:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:42.842 17:19:09 -- setup/common.sh@33 -- # echo 0 00:02:42.842 17:19:09 -- setup/common.sh@33 -- # return 0 00:02:42.842 17:19:09 -- setup/hugepages.sh@97 -- # anon=0 00:02:42.842 17:19:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:42.842 17:19:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.842 17:19:09 -- setup/common.sh@18 -- # local node= 00:02:42.842 17:19:09 -- setup/common.sh@19 -- # local var val 00:02:42.842 17:19:09 -- setup/common.sh@20 -- # local mem_f mem 00:02:42.842 17:19:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.842 17:19:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.842 17:19:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.842 17:19:09 -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.842 17:19:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.842 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.842 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.842 17:19:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 39058484 kB' 'MemAvailable: 42734944 kB' 'Buffers: 2696 kB' 'Cached: 15803284 kB' 'SwapCached: 0 kB' 'Active: 12757340 kB' 'Inactive: 3488624 kB' 'Active(anon): 12172624 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443360 kB' 'Mapped: 173288 kB' 'Shmem: 11732640 kB' 'KReclaimable: 193920 kB' 'Slab: 560888 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 366968 kB' 'KernelStack: 12944 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 13300416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:42.842 17:19:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.842 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.842 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.842 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.842 17:19:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.842 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.842 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.842 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.842 17:19:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.842 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.842 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.842 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.842 17:19:09 -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:42.842 17:19:09 -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: the per-field scan against \H\u\g\e\P\a\g\e\s\_\S\u\r\p continues through Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack and PageTables, each test followed by continue]
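The long runs of [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] in this part of the log are get_meminfo walking /proc/meminfo field by field until it reaches the one it was asked for and echoing that value. The snippet below is a stand-alone approximation of that lookup, not the actual setup/common.sh implementation; meminfo_value is an illustrative name, and the real helper can also read a per-node /sys/devices/system/node/node<N>/meminfo file, which this sketch leaves out.

#!/usr/bin/env bash
# Sketch only: fetch one field of /proc/meminfo the way the traced loop does -
# split each line on ': ', compare the key, print the value of the first match.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"            # kB for most fields, a bare count for HugePages_*
        return 0
    done < /proc/meminfo
    return 1
}

meminfo_value HugePages_Total   # 1536 while this test is configured
meminfo_value HugePages_Surp    # 0 in the trace above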
00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': '
00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: the scan continues through SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted and AnonHugePages, each test followed by continue]
00:02:42.843 17:19:09 --
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.843 17:19:09 -- setup/common.sh@33 -- # echo 0 00:02:42.843 17:19:09 -- setup/common.sh@33 -- # return 0 00:02:42.843 17:19:09 -- setup/hugepages.sh@99 -- # surp=0 00:02:42.843 17:19:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:42.843 17:19:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:42.843 17:19:09 -- setup/common.sh@18 -- # local node= 00:02:42.843 17:19:09 -- setup/common.sh@19 -- # local var val 00:02:42.843 17:19:09 -- setup/common.sh@20 -- # local mem_f mem 00:02:42.843 17:19:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.843 17:19:09 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:02:42.843 17:19:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.843 17:19:09 -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.843 17:19:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 39058716 kB' 'MemAvailable: 42735176 kB' 'Buffers: 2696 kB' 'Cached: 15803296 kB' 'SwapCached: 0 kB' 'Active: 12757272 kB' 'Inactive: 3488624 kB' 'Active(anon): 12172556 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443196 kB' 'Mapped: 173212 kB' 'Shmem: 11732652 kB' 'KReclaimable: 193920 kB' 'Slab: 560912 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 366992 kB' 'KernelStack: 12944 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 13300432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.843 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.843 
17:19:09 -- setup/common.sh@31 -- # IFS=': '
00:02:42.843 17:19:09 -- setup/common.sh@31 -- # read -r var val _
00:02:42.844 17:19:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:42.844 17:19:09 -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: the per-field scan against \H\u\g\e\P\a\g\e\s\_\R\s\v\d continues through Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable and Bounce, each test followed by continue]
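These back-to-back get_meminfo calls are collecting the numbers verify_nr_hugepages compares next: surplus, reserved and anonymous hugepages are all 0 here, so the check effectively reduces to HugePages_Total matching the 512 + 1024 = 1536 pages requested through HUGENODE. The sketch below re-derives that comparison with awk and also prints the per-node 2048 kB counters from sysfs as a comparable node-level view; the test itself gathers its per-node numbers through get_meminfo and /sys/devices/system/node/node<N>/meminfo, so treat this as an illustration under those assumptions, not the test's own code.

#!/usr/bin/env bash
# Sketch only: re-derive the totals this part of the trace is about to verify.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

# Expected: nodes_hp[0]=512 + nodes_hp[1]=1024 = 1536 (values taken from the trace).
expected=$(( 512 + 1024 ))

echo "total=$total surp=$surp rsvd=$rsvd expected=$expected"
(( total == expected )) || echo "hugepage total does not match the HUGENODE request"

# Per-node view: how many 2048 kB hugepages each NUMA node currently holds.
for node in /sys/devices/system/node/node[0-9]*; do
    count=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    echo "${node##*/}: $count x 2048kB hugepages"
done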
00:02:42.844 17:19:09 -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: the scan continues through WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped and FileHugePages, each test followed by continue]
00:02:42.844 17:19:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:42.844 17:19:09 --
setup/common.sh@32 -- # continue 00:02:42.844 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.844 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.844 17:19:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.844 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.844 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.844 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.844 17:19:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.844 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.844 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.844 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.844 17:19:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.844 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.844 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.844 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.844 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:42.845 17:19:09 -- setup/common.sh@33 -- # echo 0 00:02:42.845 17:19:09 -- setup/common.sh@33 -- # return 0 00:02:42.845 17:19:09 -- setup/hugepages.sh@100 -- # resv=0 00:02:42.845 17:19:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:42.845 nr_hugepages=1536 00:02:42.845 17:19:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:42.845 resv_hugepages=0 00:02:42.845 17:19:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:42.845 surplus_hugepages=0 00:02:42.845 17:19:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:42.845 anon_hugepages=0 00:02:42.845 17:19:09 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:42.845 17:19:09 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:42.845 17:19:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:42.845 17:19:09 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:42.845 17:19:09 -- setup/common.sh@18 -- # local node= 00:02:42.845 17:19:09 -- setup/common.sh@19 -- # local var val 00:02:42.845 17:19:09 -- setup/common.sh@20 -- # local mem_f mem 00:02:42.845 17:19:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.845 17:19:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:42.845 17:19:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:42.845 17:19:09 -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.845 17:19:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 39058212 kB' 'MemAvailable: 42734672 kB' 'Buffers: 2696 kB' 'Cached: 15803308 kB' 'SwapCached: 0 kB' 'Active: 12757052 kB' 'Inactive: 3488624 kB' 'Active(anon): 12172336 
kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 442944 kB' 'Mapped: 173212 kB' 'Shmem: 11732664 kB' 'KReclaimable: 193920 kB' 'Slab: 560912 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 366992 kB' 'KernelStack: 12928 kB' 'PageTables: 7700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 13300448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 
17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.845 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.845 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 
17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- 
setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:42.846 17:19:09 -- setup/common.sh@33 -- # echo 1536 00:02:42.846 17:19:09 -- setup/common.sh@33 -- # return 0 00:02:42.846 17:19:09 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:42.846 17:19:09 -- setup/hugepages.sh@112 -- # get_nodes 00:02:42.846 17:19:09 -- setup/hugepages.sh@27 -- # local node 00:02:42.846 17:19:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.846 17:19:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:42.846 17:19:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:42.846 17:19:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:42.846 17:19:09 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:42.846 17:19:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:42.846 17:19:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:42.846 17:19:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:42.846 17:19:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:42.846 17:19:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.846 17:19:09 -- setup/common.sh@18 -- # local node=0 00:02:42.846 17:19:09 -- setup/common.sh@19 -- # local var val 00:02:42.846 17:19:09 -- setup/common.sh@20 -- # local mem_f mem 00:02:42.846 17:19:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.846 17:19:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:42.846 17:19:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:42.846 17:19:09 -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.846 17:19:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20188720 kB' 'MemUsed: 12688220 kB' 'SwapCached: 0 kB' 'Active: 7063044 kB' 'Inactive: 3327648 kB' 'Active(anon): 6830568 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10237080 kB' 'Mapped: 107024 kB' 'AnonPages: 156768 kB' 'Shmem: 6676956 kB' 'KernelStack: 7752 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104008 kB' 'Slab: 332408 kB' 'SReclaimable: 104008 kB' 'SUnreclaim: 228400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.846 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.846 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # 
continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@33 -- # echo 0 00:02:42.847 17:19:09 -- setup/common.sh@33 -- # return 0 00:02:42.847 17:19:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:42.847 17:19:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:42.847 17:19:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:42.847 17:19:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:42.847 17:19:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:42.847 17:19:09 -- setup/common.sh@18 -- # local node=1 00:02:42.847 17:19:09 -- setup/common.sh@19 -- # local var val 00:02:42.847 17:19:09 
-- setup/common.sh@20 -- # local mem_f mem 00:02:42.847 17:19:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:42.847 17:19:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:42.847 17:19:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:42.847 17:19:09 -- setup/common.sh@28 -- # mapfile -t mem 00:02:42.847 17:19:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664776 kB' 'MemFree: 18869764 kB' 'MemUsed: 8795012 kB' 'SwapCached: 0 kB' 'Active: 5694220 kB' 'Inactive: 160976 kB' 'Active(anon): 5341980 kB' 'Inactive(anon): 0 kB' 'Active(file): 352240 kB' 'Inactive(file): 160976 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5568940 kB' 'Mapped: 66188 kB' 'AnonPages: 286396 kB' 'Shmem: 5055724 kB' 'KernelStack: 5176 kB' 'PageTables: 3296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 89912 kB' 'Slab: 228504 kB' 'SReclaimable: 89912 kB' 'SUnreclaim: 138592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.847 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.847 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 
00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 
17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # continue 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # IFS=': ' 00:02:42.848 17:19:09 -- setup/common.sh@31 -- # read -r var val _ 00:02:42.848 17:19:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:42.848 17:19:09 -- setup/common.sh@33 -- # echo 0 00:02:42.848 17:19:09 -- setup/common.sh@33 -- # return 0 00:02:42.848 17:19:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:42.848 17:19:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:42.848 17:19:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:42.848 17:19:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:42.848 17:19:09 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:42.848 node0=512 expecting 512 00:02:42.848 17:19:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:42.848 17:19:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:42.848 17:19:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:42.848 17:19:09 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:42.848 node1=1024 expecting 1024 00:02:42.848 17:19:09 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:42.848 00:02:42.848 real 0m1.441s 00:02:42.848 user 0m0.573s 00:02:42.848 sys 0m0.829s 00:02:42.848 17:19:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:42.848 17:19:09 -- common/autotest_common.sh@10 -- # set +x 00:02:42.848 ************************************ 00:02:42.848 END TEST custom_alloc 00:02:42.848 ************************************ 00:02:42.848 17:19:09 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:42.848 17:19:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:42.848 17:19:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:42.848 17:19:09 -- common/autotest_common.sh@10 -- # set +x 00:02:43.107 ************************************ 00:02:43.107 START TEST no_shrink_alloc 00:02:43.107 ************************************ 00:02:43.107 17:19:09 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:02:43.107 17:19:09 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:43.107 17:19:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:43.107 17:19:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:43.107 17:19:09 -- setup/hugepages.sh@51 -- # shift 00:02:43.107 17:19:09 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:43.107 17:19:09 -- setup/hugepages.sh@52 -- # local node_ids 00:02:43.107 17:19:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:43.107 17:19:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:43.107 17:19:09 -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 0 00:02:43.107 17:19:09 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:43.107 17:19:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:43.107 17:19:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:43.107 17:19:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:43.107 17:19:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:43.107 17:19:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:43.107 17:19:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:43.107 17:19:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:43.107 17:19:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:43.107 17:19:09 -- setup/hugepages.sh@73 -- # return 0 00:02:43.107 17:19:09 -- setup/hugepages.sh@198 -- # setup output 00:02:43.107 17:19:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.107 17:19:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:02:44.045 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:44.045 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:44.045 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:44.045 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:44.045 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:44.045 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:44.045 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:44.045 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:44.045 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:44.045 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:44.045 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:44.045 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:44.045 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:44.045 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:44.045 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:44.045 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:44.045 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:44.307 17:19:10 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:44.307 17:19:10 -- setup/hugepages.sh@89 -- # local node 00:02:44.307 17:19:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:44.307 17:19:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:44.307 17:19:10 -- setup/hugepages.sh@92 -- # local surp 00:02:44.307 17:19:10 -- setup/hugepages.sh@93 -- # local resv 00:02:44.307 17:19:10 -- setup/hugepages.sh@94 -- # local anon 00:02:44.307 17:19:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:44.307 17:19:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:44.307 17:19:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:44.307 17:19:10 -- setup/common.sh@18 -- # local node= 00:02:44.307 17:19:10 -- setup/common.sh@19 -- # local var val 00:02:44.307 17:19:10 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.307 17:19:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.307 17:19:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.307 17:19:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.307 17:19:10 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.307 17:19:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.307 17:19:10 -- setup/common.sh@31 
-- # IFS=': ' 00:02:44.307 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.307 17:19:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40070244 kB' 'MemAvailable: 43746704 kB' 'Buffers: 2696 kB' 'Cached: 15803376 kB' 'SwapCached: 0 kB' 'Active: 12757716 kB' 'Inactive: 3488624 kB' 'Active(anon): 12173000 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443488 kB' 'Mapped: 173376 kB' 'Shmem: 11732732 kB' 'KReclaimable: 193920 kB' 'Slab: 560896 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 366976 kB' 'KernelStack: 12912 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13300880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:44.307 17:19:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.307 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.307 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.307 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.307 17:19:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.307 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.307 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.307 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.307 17:19:10 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.307 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.307 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.307 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.307 17:19:10 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.307 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.307 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.307 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.307 17:19:10 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.307 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.307 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.307 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.307 17:19:10 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- 
# IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- 
setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.308 17:19:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:44.308 17:19:10 -- setup/common.sh@33 -- # echo 0 00:02:44.308 17:19:10 -- setup/common.sh@33 -- # return 0 00:02:44.308 17:19:10 -- setup/hugepages.sh@97 -- # anon=0 00:02:44.308 17:19:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:44.308 17:19:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.308 17:19:10 -- setup/common.sh@18 -- # local node= 00:02:44.308 17:19:10 -- setup/common.sh@19 -- # local var val 00:02:44.308 17:19:10 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.308 17:19:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.308 17:19:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.308 17:19:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.308 17:19:10 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.308 17:19:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.308 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40069992 kB' 'MemAvailable: 43746452 kB' 'Buffers: 2696 kB' 'Cached: 15803384 kB' 'SwapCached: 0 kB' 'Active: 12758348 kB' 'Inactive: 3488624 kB' 'Active(anon): 12173632 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 444200 kB' 'Mapped: 
173376 kB' 'Shmem: 11732740 kB' 'KReclaimable: 193920 kB' 'Slab: 560896 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 366976 kB' 'KernelStack: 12976 kB' 'PageTables: 7764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13301256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 
17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # 
continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 
-- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.309 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.309 17:19:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.310 17:19:10 -- setup/common.sh@33 -- # echo 0 00:02:44.310 17:19:10 -- setup/common.sh@33 -- # return 0 00:02:44.310 17:19:10 -- setup/hugepages.sh@99 -- # surp=0 00:02:44.310 17:19:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:44.310 17:19:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:44.310 17:19:10 -- setup/common.sh@18 -- # local node= 00:02:44.310 17:19:10 -- setup/common.sh@19 -- # local var val 00:02:44.310 17:19:10 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.310 17:19:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.310 17:19:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.310 17:19:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.310 17:19:10 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.310 17:19:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40069720 kB' 'MemAvailable: 43746180 kB' 'Buffers: 2696 kB' 'Cached: 15803392 kB' 'SwapCached: 0 kB' 'Active: 12757416 kB' 'Inactive: 3488624 kB' 'Active(anon): 12172700 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443244 kB' 'Mapped: 173344 kB' 'Shmem: 11732748 kB' 'KReclaimable: 193920 kB' 'Slab: 560896 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 366976 kB' 'KernelStack: 12928 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13301272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 
17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.310 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.310 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var 
val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 
17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:44.311 17:19:10 -- setup/common.sh@33 -- # echo 0 00:02:44.311 17:19:10 -- setup/common.sh@33 -- # return 0 00:02:44.311 17:19:10 -- setup/hugepages.sh@100 -- # resv=0 00:02:44.311 17:19:10 -- setup/hugepages.sh@102 
-- # echo nr_hugepages=1024 00:02:44.311 nr_hugepages=1024 00:02:44.311 17:19:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:44.311 resv_hugepages=0 00:02:44.311 17:19:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:44.311 surplus_hugepages=0 00:02:44.311 17:19:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:44.311 anon_hugepages=0 00:02:44.311 17:19:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.311 17:19:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:44.311 17:19:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:44.311 17:19:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:44.311 17:19:10 -- setup/common.sh@18 -- # local node= 00:02:44.311 17:19:10 -- setup/common.sh@19 -- # local var val 00:02:44.311 17:19:10 -- setup/common.sh@20 -- # local mem_f mem 00:02:44.311 17:19:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.311 17:19:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.311 17:19:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.311 17:19:10 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.311 17:19:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40069972 kB' 'MemAvailable: 43746432 kB' 'Buffers: 2696 kB' 'Cached: 15803412 kB' 'SwapCached: 0 kB' 'Active: 12757576 kB' 'Inactive: 3488624 kB' 'Active(anon): 12172860 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443388 kB' 'Mapped: 173268 kB' 'Shmem: 11732768 kB' 'KReclaimable: 193920 kB' 'Slab: 560900 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 366980 kB' 'KernelStack: 12976 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13301288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.311 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.311 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.311 17:19:10 -- setup/common.sh@31 
-- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- 
setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.312 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.312 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 
00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:44.313 17:19:10 -- setup/common.sh@33 -- # echo 1024 00:02:44.313 17:19:10 -- setup/common.sh@33 -- # return 0 00:02:44.313 17:19:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:44.313 17:19:10 -- setup/hugepages.sh@112 -- # get_nodes 00:02:44.313 17:19:10 -- setup/hugepages.sh@27 -- # local node 00:02:44.313 17:19:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.313 17:19:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:44.313 17:19:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.313 17:19:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:44.313 17:19:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:44.313 17:19:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:44.313 17:19:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:44.313 17:19:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:44.313 17:19:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:44.313 17:19:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:44.313 17:19:10 -- setup/common.sh@18 -- # local node=0 00:02:44.313 17:19:10 -- setup/common.sh@19 -- # local var val 00:02:44.313 17:19:10 -- setup/common.sh@20 -- 
# local mem_f mem 00:02:44.313 17:19:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.313 17:19:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:44.313 17:19:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:44.313 17:19:10 -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.313 17:19:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19137980 kB' 'MemUsed: 13738960 kB' 'SwapCached: 0 kB' 'Active: 7063076 kB' 'Inactive: 3327648 kB' 'Active(anon): 6830600 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10237156 kB' 'Mapped: 107080 kB' 'AnonPages: 156740 kB' 'Shmem: 6677032 kB' 'KernelStack: 7784 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104008 kB' 'Slab: 332508 kB' 'SReclaimable: 104008 kB' 'SUnreclaim: 228500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 
17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.313 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.313 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 
17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # continue 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # IFS=': ' 00:02:44.314 17:19:10 -- setup/common.sh@31 -- # read -r var val _ 00:02:44.314 17:19:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:44.314 17:19:10 -- setup/common.sh@33 -- # echo 0 00:02:44.314 17:19:10 -- setup/common.sh@33 -- # return 0 00:02:44.314 17:19:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:44.314 17:19:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:44.314 17:19:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:44.314 17:19:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:44.314 17:19:10 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:44.314 node0=1024 expecting 1024 00:02:44.314 17:19:10 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:44.314 17:19:10 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:44.314 17:19:10 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:44.314 17:19:10 -- setup/hugepages.sh@202 -- # setup output 00:02:44.314 17:19:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.314 17:19:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:02:45.704 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:45.704 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:45.704 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:45.704 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:45.704 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:45.704 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:45.704 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:45.704 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:45.704 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:45.704 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:45.704 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:45.704 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:45.704 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:45.704 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:45.704 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:45.704 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:45.704 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:45.704 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:45.704 17:19:11 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:45.704 17:19:11 -- setup/hugepages.sh@89 -- # local node 00:02:45.704 17:19:11 -- setup/hugepages.sh@90 -- # local sorted_t 
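The trace above walks setup/common.sh's get_meminfo helper four times (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total), each pass splitting the meminfo fields on ': ' and echoing only the requested value, after which setup/hugepages.sh checks that the kernel-reported total equals the requested count plus surplus and reserved pages and then splits the expectation across the two NUMA nodes. A minimal sketch of that pattern, assuming plain /proc/meminfo and the per-node sysfs meminfo files (illustrative only, not the traced scripts themselves):

#!/usr/bin/env bash
# Sketch of the meminfo-scan pattern traced in this log (not the real
# setup/common.sh / setup/hugepages.sh code).
set -euo pipefail

# Print the value of one /proc/meminfo-style field. With a node number, read
# the per-node copy under sysfs and strip its "Node <n> " line prefix first.
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node${node}/meminfo ]]; then
        file=/sys/devices/system/node/node${node}/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"     # kB for sizes, a plain count for HugePages_* fields
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$file")
    return 1
}

# The consistency check shown in the trace: the kernel-reported HugePages_Total
# must equal the requested page count plus surplus and reserved pages.
requested=1024   # nr_hugepages echoed in the log above
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)
if (( total == requested + surp + resv )); then
    echo "hugepage accounting is consistent (total=$total surp=$surp rsvd=$resv)"
else
    echo "unexpected hugepage accounting: total=$total requested=$requested surp=$surp rsvd=$resv" >&2
fi

With node0 expected to hold all 1024 pages, the run then re-invokes scripts/setup.sh with CLEAR_HUGE=no and NRHUGE=512, and the INFO line above confirms that the existing allocation is left in place.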
00:02:45.704 17:19:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:45.704 17:19:11 -- setup/hugepages.sh@92 -- # local surp 00:02:45.704 17:19:11 -- setup/hugepages.sh@93 -- # local resv 00:02:45.704 17:19:11 -- setup/hugepages.sh@94 -- # local anon 00:02:45.704 17:19:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:45.704 17:19:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:45.704 17:19:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:45.704 17:19:11 -- setup/common.sh@18 -- # local node= 00:02:45.704 17:19:11 -- setup/common.sh@19 -- # local var val 00:02:45.704 17:19:11 -- setup/common.sh@20 -- # local mem_f mem 00:02:45.704 17:19:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.704 17:19:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.704 17:19:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.704 17:19:11 -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.704 17:19:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40065884 kB' 'MemAvailable: 43742344 kB' 'Buffers: 2696 kB' 'Cached: 15803452 kB' 'SwapCached: 0 kB' 'Active: 12758900 kB' 'Inactive: 3488624 kB' 'Active(anon): 12174184 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 444504 kB' 'Mapped: 173296 kB' 'Shmem: 11732808 kB' 'KReclaimable: 193920 kB' 'Slab: 560892 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 366972 kB' 'KernelStack: 13296 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13302464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197028 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.704 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.704 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 
17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ 
SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:45.705 17:19:11 -- setup/common.sh@33 -- # echo 0 00:02:45.705 17:19:11 -- setup/common.sh@33 -- # return 0 00:02:45.705 17:19:11 -- setup/hugepages.sh@97 -- # anon=0 00:02:45.705 17:19:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:45.705 17:19:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.705 17:19:11 -- setup/common.sh@18 -- # local node= 00:02:45.705 17:19:11 -- 
setup/common.sh@19 -- # local var val 00:02:45.705 17:19:11 -- setup/common.sh@20 -- # local mem_f mem 00:02:45.705 17:19:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.705 17:19:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.705 17:19:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.705 17:19:11 -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.705 17:19:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40068032 kB' 'MemAvailable: 43744492 kB' 'Buffers: 2696 kB' 'Cached: 15803452 kB' 'SwapCached: 0 kB' 'Active: 12759216 kB' 'Inactive: 3488624 kB' 'Active(anon): 12174500 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 444856 kB' 'Mapped: 173308 kB' 'Shmem: 11732808 kB' 'KReclaimable: 193920 kB' 'Slab: 560984 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367064 kB' 'KernelStack: 13296 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13303852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197044 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.705 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 
00:02:45.705 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.705 17:19:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- 
setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:11 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:11 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.706 17:19:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:45.706 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.706 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.707 17:19:12 -- setup/common.sh@33 -- # echo 0 00:02:45.707 17:19:12 -- setup/common.sh@33 -- # return 0 00:02:45.707 17:19:12 -- setup/hugepages.sh@99 -- # surp=0 00:02:45.707 17:19:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:45.707 17:19:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:45.707 17:19:12 -- setup/common.sh@18 -- # local node= 00:02:45.707 17:19:12 -- setup/common.sh@19 -- # local var val 00:02:45.707 17:19:12 -- setup/common.sh@20 -- # local mem_f mem 00:02:45.707 17:19:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.707 17:19:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.707 17:19:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.707 17:19:12 -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.707 17:19:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40067160 kB' 'MemAvailable: 43743620 kB' 'Buffers: 2696 kB' 'Cached: 15803456 kB' 'SwapCached: 0 kB' 'Active: 12759848 kB' 'Inactive: 3488624 kB' 'Active(anon): 12175132 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 445548 kB' 'Mapped: 173308 kB' 'Shmem: 11732812 kB' 'KReclaimable: 193920 kB' 'Slab: 560980 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367060 kB' 'KernelStack: 13584 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13303868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197188 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.707 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.707 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- 
setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 
00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:45.708 17:19:12 -- setup/common.sh@33 -- # echo 0 00:02:45.708 17:19:12 -- setup/common.sh@33 -- # return 0 00:02:45.708 17:19:12 -- setup/hugepages.sh@100 -- # resv=0 00:02:45.708 17:19:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:45.708 nr_hugepages=1024 00:02:45.708 17:19:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:45.708 resv_hugepages=0 00:02:45.708 17:19:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:45.708 surplus_hugepages=0 00:02:45.708 17:19:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:45.708 anon_hugepages=0 00:02:45.708 17:19:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:45.708 17:19:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:45.708 17:19:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:45.708 17:19:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:45.708 17:19:12 -- setup/common.sh@18 -- # local node= 00:02:45.708 17:19:12 -- setup/common.sh@19 -- # local var val 00:02:45.708 17:19:12 -- setup/common.sh@20 -- # local mem_f mem 00:02:45.708 17:19:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.708 17:19:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:45.708 17:19:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:45.708 17:19:12 -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.708 17:19:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541716 kB' 'MemFree: 40069884 kB' 'MemAvailable: 43746344 kB' 'Buffers: 2696 kB' 'Cached: 15803484 kB' 'SwapCached: 0 kB' 'Active: 12757764 kB' 'Inactive: 3488624 kB' 'Active(anon): 12173048 kB' 'Inactive(anon): 0 kB' 'Active(file): 584716 kB' 'Inactive(file): 3488624 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443408 kB' 'Mapped: 173272 kB' 'Shmem: 11732840 kB' 'KReclaimable: 193920 kB' 'Slab: 560980 kB' 'SReclaimable: 193920 kB' 'SUnreclaim: 367060 kB' 'KernelStack: 13008 kB' 'PageTables: 7764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 13301492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196852 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1756764 kB' 'DirectMap2M: 14940160 kB' 'DirectMap1G: 52428800 kB' 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.708 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.708 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.709 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.709 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:45.710 17:19:12 -- setup/common.sh@33 -- # echo 1024 00:02:45.710 17:19:12 -- setup/common.sh@33 -- # return 0 00:02:45.710 
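The trace above is get_meminfo walking a meminfo file key by key until it reaches HugePages_Total and echoes 1024; the per-node variant right after it does the same against /sys/devices/system/node/node0/meminfo. A minimal stand-alone sketch of that lookup, with an illustrative helper name not taken from the SPDK tree (the "Node N " prefix is stripped with sed here instead of the extglob expansion the traced script uses):

#!/usr/bin/env bash
# get_meminfo_value KEY [NODE] - print the value column for KEY, the way the
# traced loop does: read "key: value unit" records and stop at the first match.
get_meminfo_value() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node view
    fi
    # per-node files prefix each line with "Node <N> "; drop it so the key
    # lands in $var exactly as it does for the global /proc/meminfo
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# On this runner: get_meminfo_value HugePages_Total   -> 1024
#                 get_meminfo_value HugePages_Surp 0  -> 0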
17:19:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:45.710 17:19:12 -- setup/hugepages.sh@112 -- # get_nodes 00:02:45.710 17:19:12 -- setup/hugepages.sh@27 -- # local node 00:02:45.710 17:19:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:45.710 17:19:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:45.710 17:19:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:45.710 17:19:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:45.710 17:19:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:45.710 17:19:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:45.710 17:19:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:45.710 17:19:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:45.710 17:19:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:45.710 17:19:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:45.710 17:19:12 -- setup/common.sh@18 -- # local node=0 00:02:45.710 17:19:12 -- setup/common.sh@19 -- # local var val 00:02:45.710 17:19:12 -- setup/common.sh@20 -- # local mem_f mem 00:02:45.710 17:19:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:45.710 17:19:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:45.710 17:19:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:45.710 17:19:12 -- setup/common.sh@28 -- # mapfile -t mem 00:02:45.710 17:19:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19145240 kB' 'MemUsed: 13731700 kB' 'SwapCached: 0 kB' 'Active: 7062868 kB' 'Inactive: 3327648 kB' 'Active(anon): 6830392 kB' 'Inactive(anon): 0 kB' 'Active(file): 232476 kB' 'Inactive(file): 3327648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10237224 kB' 'Mapped: 107084 kB' 'AnonPages: 156468 kB' 'Shmem: 6677100 kB' 'KernelStack: 7832 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104008 kB' 'Slab: 332636 kB' 'SReclaimable: 104008 kB' 'SUnreclaim: 228628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.710 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.710 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 
-- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # continue 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # IFS=': ' 00:02:45.711 17:19:12 -- setup/common.sh@31 -- # read -r var val _ 00:02:45.711 17:19:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:45.711 17:19:12 -- setup/common.sh@33 -- # echo 0 00:02:45.711 17:19:12 -- setup/common.sh@33 -- # return 0 00:02:45.711 17:19:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:45.711 17:19:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:45.711 17:19:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:45.711 17:19:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:45.711 17:19:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:45.711 node0=1024 expecting 1024 00:02:45.711 17:19:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:45.711 00:02:45.711 real 0m2.689s 00:02:45.711 user 0m1.102s 00:02:45.711 sys 0m1.508s 00:02:45.711 17:19:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:45.711 17:19:12 -- common/autotest_common.sh@10 -- # set +x 00:02:45.711 ************************************ 00:02:45.711 END TEST no_shrink_alloc 00:02:45.711 ************************************ 00:02:45.711 17:19:12 -- setup/hugepages.sh@217 -- # clear_hp 00:02:45.711 17:19:12 -- setup/hugepages.sh@37 -- # local node hp 00:02:45.711 17:19:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:45.711 17:19:12 -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.711 17:19:12 -- setup/hugepages.sh@41 -- # echo 0 00:02:45.711 17:19:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.711 17:19:12 -- setup/hugepages.sh@41 -- # echo 0 00:02:45.711 17:19:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:45.711 17:19:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.711 17:19:12 -- setup/hugepages.sh@41 -- # echo 0 00:02:45.711 17:19:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:45.711 17:19:12 -- setup/hugepages.sh@41 -- # echo 0 00:02:45.711 17:19:12 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:45.711 17:19:12 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:45.711 00:02:45.711 real 0m11.639s 00:02:45.711 user 0m4.377s 00:02:45.711 sys 0m5.958s 00:02:45.711 17:19:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:45.711 17:19:12 -- common/autotest_common.sh@10 -- # set +x 00:02:45.711 ************************************ 00:02:45.711 END TEST hugepages 00:02:45.711 ************************************ 00:02:45.711 17:19:12 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:02:45.711 17:19:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:45.711 17:19:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:45.711 17:19:12 -- common/autotest_common.sh@10 -- # set +x 00:02:45.711 ************************************ 00:02:45.711 START TEST driver 00:02:45.711 ************************************ 00:02:45.711 17:19:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:02:45.970 * Looking for test storage... 
00:02:45.970 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:45.970 17:19:12 -- setup/driver.sh@68 -- # setup reset 00:02:45.970 17:19:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:45.970 17:19:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:48.503 17:19:14 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:48.503 17:19:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:48.503 17:19:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:48.503 17:19:14 -- common/autotest_common.sh@10 -- # set +x 00:02:48.503 ************************************ 00:02:48.503 START TEST guess_driver 00:02:48.503 ************************************ 00:02:48.503 17:19:14 -- common/autotest_common.sh@1111 -- # guess_driver 00:02:48.503 17:19:14 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:48.503 17:19:14 -- setup/driver.sh@47 -- # local fail=0 00:02:48.503 17:19:14 -- setup/driver.sh@49 -- # pick_driver 00:02:48.503 17:19:14 -- setup/driver.sh@36 -- # vfio 00:02:48.503 17:19:14 -- setup/driver.sh@21 -- # local iommu_grups 00:02:48.503 17:19:14 -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:48.503 17:19:14 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:48.503 17:19:14 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:48.503 17:19:14 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:48.503 17:19:14 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:02:48.503 17:19:14 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:48.503 17:19:14 -- setup/driver.sh@14 -- # mod vfio_pci 00:02:48.503 17:19:14 -- setup/driver.sh@12 -- # dep vfio_pci 00:02:48.503 17:19:14 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:48.503 17:19:14 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:48.503 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:48.503 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:48.503 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:48.503 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:48.503 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:48.503 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:48.503 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:48.503 17:19:14 -- setup/driver.sh@30 -- # return 0 00:02:48.503 17:19:14 -- setup/driver.sh@37 -- # echo vfio-pci 00:02:48.503 17:19:14 -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:48.503 17:19:14 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:48.503 17:19:14 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:48.503 Looking for driver=vfio-pci 00:02:48.504 17:19:14 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:48.504 17:19:14 -- setup/driver.sh@45 -- # setup output config 00:02:48.504 17:19:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.504 17:19:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
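The guess_driver trace above settles on vfio-pci because the host exposes 141 populated IOMMU groups and `modprobe --show-depends vfio_pci` resolves to real .ko modules. A condensed sketch of that decision; the fallback driver named below is an assumption, since this log never reaches the alternative branch:

#!/usr/bin/env bash
# Decide which userspace PCI driver the tests should bind: vfio-pci when the
# IOMMU is populated and the module resolves, otherwise fall back
# (uio_pci_generic here is illustrative, not taken from this log).
pick_userspace_driver() {
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo uio_pci_generic
    fi
}
# On this runner: 141 IOMMU groups -> pick_userspace_driver prints vfio-pci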
00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:49.439 17:19:15 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:49.439 17:19:15 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:49.439 17:19:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.378 17:19:16 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:02:50.378 17:19:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:50.378 17:19:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:50.378 17:19:16 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:50.378 17:19:16 -- setup/driver.sh@65 -- # setup reset 00:02:50.378 17:19:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.378 17:19:16 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.911 00:02:52.911 real 0m4.446s 00:02:52.911 user 0m0.944s 00:02:52.911 sys 0m1.674s 00:02:52.911 17:19:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:52.911 17:19:19 -- common/autotest_common.sh@10 -- # set +x 00:02:52.911 ************************************ 00:02:52.911 END TEST guess_driver 00:02:52.911 ************************************ 00:02:52.911 00:02:52.911 real 0m6.857s 00:02:52.911 user 0m1.479s 00:02:52.911 sys 0m2.685s 00:02:52.911 17:19:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:52.911 17:19:19 -- common/autotest_common.sh@10 -- # set +x 00:02:52.911 ************************************ 00:02:52.911 END TEST driver 00:02:52.911 ************************************ 00:02:52.911 17:19:19 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:02:52.911 17:19:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:52.911 17:19:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:52.911 17:19:19 -- common/autotest_common.sh@10 -- # set +x 00:02:52.911 ************************************ 00:02:52.911 START TEST devices 00:02:52.911 ************************************ 00:02:52.911 17:19:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:02:52.911 * Looking for test storage... 
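The devices.sh trace that follows picks its test disk by skipping zoned namespaces, skipping disks that already carry a partition table (spdk-gpt.py reports "No valid GPT data, bailing" and blkid finds no PTTYPE), and requiring at least min_disk_size=3221225472 bytes; nvme0n1 at 1000204886016 bytes behind 0000:88:00.0 passes. A simplified sketch of that filter, with illustrative names:

#!/usr/bin/env bash
# Pick the first NVMe namespace that is not zoned, has no partition table,
# and is at least 3 GiB. Simplified from the traced logic; the real tests
# also map the chosen block device back to its PCI address via sysfs.
min_disk_size=$((3 * 1024 * 1024 * 1024))
pick_test_disk() {
    local dev name size
    for dev in /sys/block/nvme*n*; do
        name=${dev##*/}
        [[ $(cat "$dev/queue/zoned" 2>/dev/null || echo none) == none ]] || continue
        blkid -s PTTYPE -o value "/dev/$name" | grep -q . && continue
        size=$(( $(cat "$dev/size") * 512 ))   # sysfs size file is in 512B sectors
        (( size >= min_disk_size )) || continue
        echo "$name"
        return 0
    done
    return 1
}
# On this runner: pick_test_disk -> nvme0n1 (1000204886016 bytes, 0000:88:00.0)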
00:02:52.911 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:02:52.911 17:19:19 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:52.911 17:19:19 -- setup/devices.sh@192 -- # setup reset 00:02:52.911 17:19:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:52.911 17:19:19 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:54.324 17:19:20 -- setup/devices.sh@194 -- # get_zoned_devs 00:02:54.324 17:19:20 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:54.324 17:19:20 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:54.324 17:19:20 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:54.324 17:19:20 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:54.324 17:19:20 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:54.324 17:19:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:54.324 17:19:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:54.324 17:19:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:54.324 17:19:20 -- setup/devices.sh@196 -- # blocks=() 00:02:54.324 17:19:20 -- setup/devices.sh@196 -- # declare -a blocks 00:02:54.324 17:19:20 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:02:54.324 17:19:20 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:02:54.324 17:19:20 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:02:54.324 17:19:20 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:02:54.324 17:19:20 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:02:54.324 17:19:20 -- setup/devices.sh@201 -- # ctrl=nvme0 00:02:54.324 17:19:20 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:02:54.324 17:19:20 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:54.324 17:19:20 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:02:54.324 17:19:20 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:02:54.324 17:19:20 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:02:54.324 No valid GPT data, bailing 00:02:54.324 17:19:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:54.324 17:19:20 -- scripts/common.sh@391 -- # pt= 00:02:54.324 17:19:20 -- scripts/common.sh@392 -- # return 1 00:02:54.324 17:19:20 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:02:54.324 17:19:20 -- setup/common.sh@76 -- # local dev=nvme0n1 00:02:54.324 17:19:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:02:54.324 17:19:20 -- setup/common.sh@80 -- # echo 1000204886016 00:02:54.324 17:19:20 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:02:54.324 17:19:20 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:02:54.324 17:19:20 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:02:54.324 17:19:20 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:54.324 17:19:20 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:54.324 17:19:20 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:54.324 17:19:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:54.324 17:19:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:54.324 17:19:20 -- common/autotest_common.sh@10 -- # set +x 00:02:54.324 ************************************ 00:02:54.324 START TEST nvme_mount 00:02:54.324 ************************************ 00:02:54.324 17:19:20 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:02:54.324 17:19:20 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:02:54.324 17:19:20 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:02:54.324 17:19:20 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:54.324 17:19:20 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:54.324 17:19:20 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:02:54.324 17:19:20 -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:54.324 17:19:20 -- setup/common.sh@40 -- # local part_no=1 00:02:54.324 17:19:20 -- setup/common.sh@41 -- # local size=1073741824 00:02:54.324 17:19:20 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:54.324 17:19:20 -- setup/common.sh@44 -- # parts=() 00:02:54.324 17:19:20 -- setup/common.sh@44 -- # local parts 00:02:54.324 17:19:20 -- setup/common.sh@46 -- # (( part = 1 )) 00:02:54.324 17:19:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:54.324 17:19:20 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:54.324 17:19:20 -- setup/common.sh@46 -- # (( part++ )) 00:02:54.324 17:19:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:54.324 17:19:20 -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:54.324 17:19:20 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:54.324 17:19:20 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:02:55.705 Creating new GPT entries in memory. 00:02:55.705 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:55.705 other utilities. 00:02:55.705 17:19:21 -- setup/common.sh@57 -- # (( part = 1 )) 00:02:55.705 17:19:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:55.705 17:19:21 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:55.705 17:19:21 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:55.705 17:19:21 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:56.644 Creating new GPT entries in memory. 00:02:56.644 The operation has completed successfully. 
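The partition_drive run traced above wipes the GPT and then creates one 1 GiB partition; the 2099199 end sector is simply 2048 + 1073741824/512 - 1, and flock serializes the second sgdisk call against anything else touching the disk. A stand-alone sketch, assuming udevadm settle is an acceptable substitute for the sync_dev_uevents.sh helper used in the log:

#!/usr/bin/env bash
# Create N consecutive 1 GiB partitions on a disk, the way the traced run does
# for nvme0n1 (partition 1: 2048..2099199, partition 2: 2099200..4196351).
partition_disk() {
    local disk=$1 parts=${2:-1} size=$((1073741824 / 512))   # 1 GiB in sectors
    local part start=2048 end=0
    sgdisk "$disk" --zap-all
    for (( part = 1; part <= parts; part++ )); do
        (( start = end ? end + 1 : 2048, end = start + size - 1 ))
        flock "$disk" sgdisk "$disk" --new=$part:$start:$end
    done
    udevadm settle     # wait for /dev/<disk>p<N> nodes to appear
}
# Usage: partition_disk /dev/nvme0n1 1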
00:02:56.644 17:19:22 -- setup/common.sh@57 -- # (( part++ )) 00:02:56.644 17:19:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:56.644 17:19:22 -- setup/common.sh@62 -- # wait 1879352 00:02:56.644 17:19:22 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.644 17:19:22 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:02:56.644 17:19:22 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.644 17:19:22 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:02:56.644 17:19:22 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:02:56.644 17:19:22 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.644 17:19:22 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:56.644 17:19:22 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:56.644 17:19:22 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:02:56.644 17:19:22 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:56.644 17:19:22 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:56.644 17:19:22 -- setup/devices.sh@53 -- # local found=0 00:02:56.644 17:19:22 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:56.644 17:19:22 -- setup/devices.sh@56 -- # : 00:02:56.644 17:19:22 -- setup/devices.sh@59 -- # local pci status 00:02:56.644 17:19:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:56.644 17:19:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:56.644 17:19:22 -- setup/devices.sh@47 -- # setup output config 00:02:56.644 17:19:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.644 17:19:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:02:57.584 17:19:23 -- setup/devices.sh@63 -- # found=1 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:57.584 17:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.584 17:19:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:57.584 17:19:24 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:57.584 17:19:24 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.584 17:19:24 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:57.584 17:19:24 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:57.584 17:19:24 -- setup/devices.sh@110 -- # cleanup_nvme 00:02:57.584 17:19:24 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.584 17:19:24 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.584 17:19:24 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:57.584 17:19:24 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:02:57.584 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:57.584 17:19:24 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:57.584 17:19:24 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:57.843 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:02:57.843 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:02:57.843 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:02:57.843 /dev/nvme0n1: calling ioctl to re-read partition table: Success 
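The cleanup_nvme trace above unmounts the test mount point if it is still mounted, then scrubs known signatures, first from the partition and then from the whole disk; that is what the "53 ef" (ext4 superblock magic), "45 46 49 20 50 41 52 54" ("EFI PART" GPT header) and "55 aa" (protective MBR) byte reports mean. A hedged equivalent with illustrative paths:

#!/usr/bin/env bash
# Tear down the nvme mount test: unmount if needed, then erase filesystem and
# partition-table signatures so the next test starts from a blank disk.
cleanup_nvme_mount() {
    local mnt=$1 part=$2 disk=$3
    if mountpoint -q "$mnt"; then
        umount "$mnt"
    fi
    [[ -b $part ]] && wipefs --all "$part"   # e.g. /dev/nvme0n1p1: ext4 magic
    [[ -b $disk ]] && wipefs --all "$disk"   # whole-disk GPT headers + PMBR
}
# Usage: cleanup_nvme_mount /path/to/nvme_mount /dev/nvme0n1p1 /dev/nvme0n1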
00:02:57.843 17:19:24 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:02:57.843 17:19:24 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:02:57.843 17:19:24 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.843 17:19:24 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:02:57.843 17:19:24 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:02:57.843 17:19:24 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.843 17:19:24 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:57.843 17:19:24 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:57.843 17:19:24 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:02:57.843 17:19:24 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:57.843 17:19:24 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:57.843 17:19:24 -- setup/devices.sh@53 -- # local found=0 00:02:57.843 17:19:24 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:57.843 17:19:24 -- setup/devices.sh@56 -- # : 00:02:57.843 17:19:24 -- setup/devices.sh@59 -- # local pci status 00:02:57.843 17:19:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.843 17:19:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:57.843 17:19:24 -- setup/devices.sh@47 -- # setup output config 00:02:57.843 17:19:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.843 17:19:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:02:59.219 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.219 17:19:25 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:02:59.219 17:19:25 -- setup/devices.sh@63 -- # found=1 00:02:59.219 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.219 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.219 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.219 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.219 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.219 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.219 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.219 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.219 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.219 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.219 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.219 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.219 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 
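The verify step traced here re-runs setup.sh config with PCI_ALLOWED pinned to 0000:88:00.0 and scans the status lines for that BDF, expecting an "Active devices: mount@nvme0n1:..." note proving setup.sh declined to touch a device that is in use. A reduced sketch of that check; the column layout of the status output and the SPDK_DIR variable are inferred from the trace, not taken from the script itself:

#!/usr/bin/env bash
# Confirm that scripts/setup.sh reports the test device as held by a mount
# and therefore skips it. Field layout mirrors the traced "read -r pci _ _ status".
verify_device_held() {
    local want_pci=$1 want_mount=$2 pci _ status found=0
    while read -r pci _ _ status; do
        [[ $pci == "$want_pci" ]] || continue
        # e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
        [[ $status == *"Active devices: "*"$want_mount"* ]] && found=1
    done < <(PCI_ALLOWED=$want_pci "$SPDK_DIR/scripts/setup.sh" config)
    (( found == 1 ))
}
# Usage: verify_device_held 0000:88:00.0 nvme0n1:nvme0n1p1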
00:02:59.219 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.219 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.219 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.219 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.219 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.219 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.220 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.220 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.220 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.220 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.220 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.220 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.220 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.220 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.220 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.220 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.220 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.220 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.220 17:19:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.220 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.220 17:19:25 -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:59.220 17:19:25 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:59.220 17:19:25 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:59.220 17:19:25 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:59.220 17:19:25 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:59.220 17:19:25 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:02:59.220 17:19:25 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:02:59.220 17:19:25 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:59.220 17:19:25 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:02:59.220 17:19:25 -- setup/devices.sh@50 -- # local mount_point= 00:02:59.220 17:19:25 -- setup/devices.sh@51 -- # local test_file= 00:02:59.220 17:19:25 -- setup/devices.sh@53 -- # local found=0 00:02:59.220 17:19:25 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:02:59.220 17:19:25 -- setup/devices.sh@59 -- # local pci status 00:02:59.220 17:19:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.220 17:19:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:59.220 17:19:25 -- setup/devices.sh@47 -- # setup output config 00:02:59.220 17:19:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.220 17:19:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ Active 
devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:00.158 17:19:26 -- setup/devices.sh@63 -- # found=1 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.158 17:19:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:00.158 17:19:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:00.416 17:19:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:00.416 17:19:26 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:00.416 17:19:26 -- setup/devices.sh@68 -- # return 0 00:03:00.416 17:19:26 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:00.416 17:19:26 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:00.416 17:19:26 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:00.416 17:19:26 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:00.416 17:19:26 -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:03:00.416 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:00.416 00:03:00.416 real 0m5.933s 00:03:00.416 user 0m1.395s 00:03:00.416 sys 0m2.079s 00:03:00.416 17:19:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:00.416 17:19:26 -- common/autotest_common.sh@10 -- # set +x 00:03:00.416 ************************************ 00:03:00.416 END TEST nvme_mount 00:03:00.416 ************************************ 00:03:00.416 17:19:26 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:00.416 17:19:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:00.416 17:19:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:00.416 17:19:26 -- common/autotest_common.sh@10 -- # set +x 00:03:00.416 ************************************ 00:03:00.416 START TEST dm_mount 00:03:00.416 ************************************ 00:03:00.416 17:19:26 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:00.416 17:19:26 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:00.416 17:19:26 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:00.416 17:19:26 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:00.416 17:19:26 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:00.416 17:19:26 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:00.417 17:19:26 -- setup/common.sh@40 -- # local part_no=2 00:03:00.417 17:19:26 -- setup/common.sh@41 -- # local size=1073741824 00:03:00.417 17:19:26 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:00.417 17:19:26 -- setup/common.sh@44 -- # parts=() 00:03:00.417 17:19:26 -- setup/common.sh@44 -- # local parts 00:03:00.417 17:19:26 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:00.417 17:19:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:00.417 17:19:26 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:00.417 17:19:26 -- setup/common.sh@46 -- # (( part++ )) 00:03:00.417 17:19:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:00.417 17:19:26 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:00.417 17:19:26 -- setup/common.sh@46 -- # (( part++ )) 00:03:00.417 17:19:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:00.417 17:19:26 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:00.417 17:19:26 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:00.417 17:19:26 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:01.356 Creating new GPT entries in memory. 00:03:01.356 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:01.356 other utilities. 00:03:01.356 17:19:27 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:01.356 17:19:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:01.356 17:19:27 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:01.356 17:19:27 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:01.356 17:19:27 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:02.738 Creating new GPT entries in memory. 00:03:02.738 The operation has completed successfully. 00:03:02.738 17:19:28 -- setup/common.sh@57 -- # (( part++ )) 00:03:02.738 17:19:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:02.738 17:19:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:02.738 17:19:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:02.738 17:19:28 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:03.691 The operation has completed successfully. 00:03:03.691 17:19:29 -- setup/common.sh@57 -- # (( part++ )) 00:03:03.691 17:19:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:03.691 17:19:29 -- setup/common.sh@62 -- # wait 1881619 00:03:03.691 17:19:29 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:03.691 17:19:29 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:03.691 17:19:29 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:03.691 17:19:29 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:03.691 17:19:29 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:03.691 17:19:29 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:03.691 17:19:29 -- setup/devices.sh@161 -- # break 00:03:03.691 17:19:29 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:03.691 17:19:29 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:03.691 17:19:29 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:03.691 17:19:29 -- setup/devices.sh@166 -- # dm=dm-0 00:03:03.691 17:19:29 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:03.691 17:19:29 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:03.691 17:19:29 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:03.691 17:19:29 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:03:03.691 17:19:29 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:03.691 17:19:29 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:03.691 17:19:29 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:03.691 17:19:29 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:03.691 17:19:30 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:03.691 17:19:30 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:03.691 17:19:30 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:03.691 17:19:30 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:03.691 17:19:30 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:03.691 17:19:30 -- setup/devices.sh@53 -- # local found=0 00:03:03.691 17:19:30 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:03.691 17:19:30 -- setup/devices.sh@56 -- # : 00:03:03.691 17:19:30 -- setup/devices.sh@59 -- # local pci status 00:03:03.691 17:19:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:03.691 17:19:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:03.691 17:19:30 -- setup/devices.sh@47 -- # setup output config 
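The dm_mount trace above stitches the two freshly created 1 GiB partitions into a device-mapper target named nvme_dm_test, resolves it to /dev/dm-0, formats it with mkfs.ext4 -qF and mounts it. xtrace only shows "dmsetup create nvme_dm_test", not what is fed to it on stdin, so the linear table in this sketch is an assumption:

#!/usr/bin/env bash
# Concatenate two partitions into one dm device and report its real node.
create_dm_test_dev() {
    local dm_name=$1 p1=$2 p2=$3 sectors
    sectors=$(blockdev --getsz "$p1")          # partition size in 512B sectors
    dmsetup create "$dm_name" <<EOF
0 $sectors linear $p1 0
$sectors $(blockdev --getsz "$p2") linear $p2 0
EOF
    readlink -f "/dev/mapper/$dm_name"         # e.g. /dev/dm-0, as in this log
}
# Usage: dm=$(create_dm_test_dev nvme_dm_test /dev/nvme0n1p1 /dev/nvme0n1p2)
#        mkfs.ext4 -qF "$dm" && mount "$dm" /path/to/dm_mount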
00:03:03.691 17:19:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.691 17:19:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:04.628 17:19:31 -- setup/devices.sh@63 -- # found=1 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.628 17:19:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:04.628 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.888 17:19:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:04.888 17:19:31 -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:04.888 17:19:31 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:04.888 17:19:31 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:04.888 17:19:31 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:04.888 17:19:31 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:04.888 17:19:31 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:04.888 17:19:31 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:04.888 17:19:31 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:04.888 17:19:31 -- setup/devices.sh@50 -- # local mount_point= 00:03:04.888 17:19:31 -- setup/devices.sh@51 -- # local test_file= 00:03:04.888 17:19:31 -- setup/devices.sh@53 -- # local found=0 00:03:04.888 17:19:31 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:04.888 17:19:31 -- setup/devices.sh@59 -- # local pci status 00:03:04.888 17:19:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:04.888 17:19:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:04.888 17:19:31 -- setup/devices.sh@47 -- # setup output config 00:03:04.888 17:19:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.888 17:19:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:05.824 17:19:32 -- setup/devices.sh@63 -- # found=1 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:05.824 17:19:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:05.824 17:19:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:05.824 17:19:32 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:05.824 17:19:32 -- setup/devices.sh@68 -- # return 0 00:03:05.824 17:19:32 -- setup/devices.sh@187 -- # cleanup_dm 00:03:05.824 17:19:32 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:05.824 17:19:32 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:05.824 17:19:32 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:06.083 17:19:32 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:06.083 17:19:32 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:06.083 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:06.083 17:19:32 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:06.083 17:19:32 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:06.083 00:03:06.083 real 0m5.535s 00:03:06.083 user 0m0.897s 00:03:06.083 sys 0m1.498s 00:03:06.083 17:19:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:06.083 17:19:32 -- common/autotest_common.sh@10 -- # set +x 00:03:06.083 ************************************ 00:03:06.083 END TEST dm_mount 00:03:06.083 ************************************ 00:03:06.083 17:19:32 -- setup/devices.sh@1 -- # cleanup 00:03:06.083 17:19:32 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:06.083 17:19:32 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:03:06.083 17:19:32 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:06.083 17:19:32 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:06.083 17:19:32 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:06.083 17:19:32 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:06.342 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:06.342 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:06.342 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:06.342 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:06.342 17:19:32 -- setup/devices.sh@12 -- 
# cleanup_dm 00:03:06.342 17:19:32 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:03:06.342 17:19:32 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:06.342 17:19:32 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:06.342 17:19:32 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:06.342 17:19:32 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:06.342 17:19:32 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:06.342 00:03:06.342 real 0m13.473s 00:03:06.342 user 0m2.990s 00:03:06.342 sys 0m4.627s 00:03:06.342 17:19:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:06.342 17:19:32 -- common/autotest_common.sh@10 -- # set +x 00:03:06.342 ************************************ 00:03:06.342 END TEST devices 00:03:06.342 ************************************ 00:03:06.342 00:03:06.342 real 0m42.717s 00:03:06.342 user 0m12.309s 00:03:06.342 sys 0m18.649s 00:03:06.342 17:19:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:06.342 17:19:32 -- common/autotest_common.sh@10 -- # set +x 00:03:06.342 ************************************ 00:03:06.342 END TEST setup.sh 00:03:06.342 ************************************ 00:03:06.342 17:19:32 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:07.277 Hugepages 00:03:07.277 node hugesize free / total 00:03:07.277 node0 1048576kB 0 / 0 00:03:07.277 node0 2048kB 2048 / 2048 00:03:07.277 node1 1048576kB 0 / 0 00:03:07.277 node1 2048kB 0 / 0 00:03:07.277 00:03:07.277 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:07.277 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:07.277 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:07.277 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:07.277 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:07.277 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:07.277 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:07.277 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:07.277 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:07.277 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:07.277 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:07.277 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:07.277 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:07.277 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:07.277 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:07.277 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:07.277 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:07.535 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:07.535 17:19:33 -- spdk/autotest.sh@130 -- # uname -s 00:03:07.535 17:19:33 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:07.535 17:19:33 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:07.535 17:19:33 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:08.471 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:08.471 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:08.471 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:08.471 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:08.471 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:08.471 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:08.471 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:08.471 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:08.471 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:08.471 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:08.471 
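The Hugepages table printed by setup.sh status above reflects what sysfs reports; the same free/total counts per NUMA node and page size can be read directly, for example:

for node in /sys/devices/system/node/node[0-9]*; do
  for sz in "$node"/hugepages/hugepages-*; do   # hugepages-2048kB, hugepages-1048576kB
    printf '%s %s free=%s total=%s\n' "$(basename "$node")" "$(basename "$sz")" \
      "$(cat "$sz/free_hugepages")" "$(cat "$sz/nr_hugepages")"
  done
done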
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:08.730 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:08.730 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:08.730 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:08.730 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:08.730 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:09.666 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:09.666 17:19:36 -- common/autotest_common.sh@1518 -- # sleep 1 00:03:11.049 17:19:37 -- common/autotest_common.sh@1519 -- # bdfs=() 00:03:11.049 17:19:37 -- common/autotest_common.sh@1519 -- # local bdfs 00:03:11.049 17:19:37 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:11.049 17:19:37 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:11.049 17:19:37 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:11.049 17:19:37 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:11.049 17:19:37 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:11.049 17:19:37 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:11.049 17:19:37 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:11.049 17:19:37 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:11.049 17:19:37 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:03:11.049 17:19:37 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.618 Waiting for block devices as requested 00:03:11.879 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:11.879 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:11.879 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:12.139 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:12.139 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:12.139 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:12.139 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:12.139 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:12.401 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:12.401 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:12.401 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:12.401 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:12.694 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:12.694 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:12.694 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:12.694 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:12.953 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:12.953 17:19:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:12.953 17:19:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:12.953 17:19:39 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:03:12.953 17:19:39 -- common/autotest_common.sh@1488 -- # grep 0000:88:00.0/nvme/nvme 00:03:12.953 17:19:39 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:12.953 17:19:39 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:12.953 17:19:39 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:12.953 17:19:39 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:03:12.953 17:19:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:12.953 
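get_nvme_ctrlr_from_bdf above resolves the PCI address 0000:88:00.0 to its character device by following the /sys/class/nvme symlinks; a standalone sketch of the same lookup (the BDF is simply the one from this run):

bdf=0000:88:00.0                         # PCI address of the NVMe controller
for ctrl in /sys/class/nvme/nvme*; do
  # the resolved sysfs path contains ".../<bdf>/nvme/nvmeN" for the matching controller
  if [[ "$(readlink -f "$ctrl")" == *"/$bdf/nvme/"* ]]; then
    echo "/dev/$(basename "$ctrl")"      # -> /dev/nvme0
  fi
done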
17:19:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:12.953 17:19:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:12.953 17:19:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:12.953 17:19:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:12.953 17:19:39 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:12.953 17:19:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:12.953 17:19:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:12.953 17:19:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:12.953 17:19:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:12.953 17:19:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:12.953 17:19:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:12.953 17:19:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:12.953 17:19:39 -- common/autotest_common.sh@1543 -- # continue 00:03:12.953 17:19:39 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:12.953 17:19:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:12.953 17:19:39 -- common/autotest_common.sh@10 -- # set +x 00:03:12.953 17:19:39 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:12.953 17:19:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:12.953 17:19:39 -- common/autotest_common.sh@10 -- # set +x 00:03:12.953 17:19:39 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:14.333 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:14.333 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:14.333 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:14.333 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:14.333 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:14.333 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:14.333 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:14.333 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:14.333 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:14.333 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:14.333 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:14.333 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:14.333 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:14.333 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:14.333 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:14.333 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:15.274 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:15.274 17:19:41 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:15.274 17:19:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:15.274 17:19:41 -- common/autotest_common.sh@10 -- # set +x 00:03:15.274 17:19:41 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:15.274 17:19:41 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:03:15.274 17:19:41 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:03:15.274 17:19:41 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:15.274 17:19:41 -- common/autotest_common.sh@1563 -- # local bdfs 00:03:15.274 17:19:41 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:03:15.274 17:19:41 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:15.274 17:19:41 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:15.274 17:19:41 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:15.274 17:19:41 -- common/autotest_common.sh@1500 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:15.274 17:19:41 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:15.533 17:19:41 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:15.533 17:19:41 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:03:15.533 17:19:41 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:15.533 17:19:41 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:15.533 17:19:41 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:15.533 17:19:41 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:15.533 17:19:41 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:15.533 17:19:41 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:88:00.0 00:03:15.533 17:19:41 -- common/autotest_common.sh@1578 -- # [[ -z 0000:88:00.0 ]] 00:03:15.533 17:19:41 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=1886906 00:03:15.533 17:19:41 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:15.533 17:19:41 -- common/autotest_common.sh@1584 -- # waitforlisten 1886906 00:03:15.533 17:19:41 -- common/autotest_common.sh@817 -- # '[' -z 1886906 ']' 00:03:15.533 17:19:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:15.533 17:19:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:15.533 17:19:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:15.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:15.533 17:19:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:15.533 17:19:41 -- common/autotest_common.sh@10 -- # set +x 00:03:15.533 [2024-04-18 17:19:41.882307] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
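get_nvme_bdfs_by_id above keeps only the controllers whose PCI device ID matches 0x0a54, using gen_nvme.sh plus a read of /sys/bus/pci/devices/<bdf>/device; the same filter expressed purely against sysfs (device ID 0x0a54 taken from this run) would be:

want=0x0a54
for dev in /sys/bus/pci/devices/*; do
  [[ "$(cat "$dev/class")" == 0x010802* ]] || continue       # class 0x010802 = NVMe controller
  [[ "$(cat "$dev/device")" == "$want" ]] && basename "$dev" # prints e.g. 0000:88:00.0
done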
00:03:15.533 [2024-04-18 17:19:41.882409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886906 ] 00:03:15.533 EAL: No free 2048 kB hugepages reported on node 1 00:03:15.533 [2024-04-18 17:19:41.943369] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:15.533 [2024-04-18 17:19:42.061067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:15.793 17:19:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:15.793 17:19:42 -- common/autotest_common.sh@850 -- # return 0 00:03:15.793 17:19:42 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:03:15.793 17:19:42 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:03:15.793 17:19:42 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:19.087 nvme0n1 00:03:19.087 17:19:45 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:19.087 [2024-04-18 17:19:45.618625] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:19.087 [2024-04-18 17:19:45.618671] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:19.087 request: 00:03:19.087 { 00:03:19.087 "nvme_ctrlr_name": "nvme0", 00:03:19.087 "password": "test", 00:03:19.087 "method": "bdev_nvme_opal_revert", 00:03:19.087 "req_id": 1 00:03:19.087 } 00:03:19.087 Got JSON-RPC error response 00:03:19.087 response: 00:03:19.087 { 00:03:19.087 "code": -32603, 00:03:19.087 "message": "Internal error" 00:03:19.087 } 00:03:19.347 17:19:45 -- common/autotest_common.sh@1590 -- # true 00:03:19.347 17:19:45 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:03:19.347 17:19:45 -- common/autotest_common.sh@1594 -- # killprocess 1886906 00:03:19.347 17:19:45 -- common/autotest_common.sh@936 -- # '[' -z 1886906 ']' 00:03:19.347 17:19:45 -- common/autotest_common.sh@940 -- # kill -0 1886906 00:03:19.347 17:19:45 -- common/autotest_common.sh@941 -- # uname 00:03:19.347 17:19:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:19.347 17:19:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1886906 00:03:19.347 17:19:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:19.347 17:19:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:19.347 17:19:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1886906' 00:03:19.347 killing process with pid 1886906 00:03:19.347 17:19:45 -- common/autotest_common.sh@955 -- # kill 1886906 00:03:19.347 17:19:45 -- common/autotest_common.sh@960 -- # wait 1886906 00:03:21.256 17:19:47 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:21.256 17:19:47 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:21.256 17:19:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:21.256 17:19:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:21.256 17:19:47 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:21.256 17:19:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:21.256 17:19:47 -- common/autotest_common.sh@10 -- # set +x 00:03:21.256 17:19:47 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:21.256 17:19:47 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.256 17:19:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.256 17:19:47 -- common/autotest_common.sh@10 -- # set +x 00:03:21.256 ************************************ 00:03:21.256 START TEST env 00:03:21.256 ************************************ 00:03:21.256 17:19:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:21.256 * Looking for test storage... 00:03:21.256 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:03:21.256 17:19:47 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:21.256 17:19:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.256 17:19:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.256 17:19:47 -- common/autotest_common.sh@10 -- # set +x 00:03:21.256 ************************************ 00:03:21.256 START TEST env_memory 00:03:21.256 ************************************ 00:03:21.256 17:19:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:21.256 00:03:21.256 00:03:21.256 CUnit - A unit testing framework for C - Version 2.1-3 00:03:21.256 http://cunit.sourceforge.net/ 00:03:21.256 00:03:21.256 00:03:21.256 Suite: memory 00:03:21.256 Test: alloc and free memory map ...[2024-04-18 17:19:47.763713] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:21.256 passed 00:03:21.256 Test: mem map translation ...[2024-04-18 17:19:47.784839] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:21.256 [2024-04-18 17:19:47.784860] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:21.256 [2024-04-18 17:19:47.784917] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:21.256 [2024-04-18 17:19:47.784930] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:21.517 passed 00:03:21.517 Test: mem map registration ...[2024-04-18 17:19:47.828577] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:21.517 [2024-04-18 17:19:47.828596] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:21.517 passed 00:03:21.517 Test: mem map adjacent registrations ...passed 00:03:21.517 00:03:21.517 Run Summary: Type Total Ran Passed Failed Inactive 00:03:21.517 suites 1 1 n/a 0 0 00:03:21.517 tests 4 4 4 0 0 00:03:21.517 asserts 152 152 152 0 n/a 00:03:21.517 00:03:21.517 Elapsed time = 0.145 seconds 00:03:21.517 00:03:21.517 real 0m0.152s 00:03:21.517 user 0m0.143s 00:03:21.517 sys 0m0.008s 00:03:21.517 17:19:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:21.517 17:19:47 -- common/autotest_common.sh@10 -- # set +x 00:03:21.517 ************************************ 
00:03:21.517 END TEST env_memory 00:03:21.517 ************************************ 00:03:21.517 17:19:47 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:21.517 17:19:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:21.517 17:19:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:21.517 17:19:47 -- common/autotest_common.sh@10 -- # set +x 00:03:21.517 ************************************ 00:03:21.517 START TEST env_vtophys 00:03:21.517 ************************************ 00:03:21.517 17:19:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:21.517 EAL: lib.eal log level changed from notice to debug 00:03:21.517 EAL: Detected lcore 0 as core 0 on socket 0 00:03:21.517 EAL: Detected lcore 1 as core 1 on socket 0 00:03:21.517 EAL: Detected lcore 2 as core 2 on socket 0 00:03:21.517 EAL: Detected lcore 3 as core 3 on socket 0 00:03:21.517 EAL: Detected lcore 4 as core 4 on socket 0 00:03:21.517 EAL: Detected lcore 5 as core 5 on socket 0 00:03:21.517 EAL: Detected lcore 6 as core 8 on socket 0 00:03:21.517 EAL: Detected lcore 7 as core 9 on socket 0 00:03:21.517 EAL: Detected lcore 8 as core 10 on socket 0 00:03:21.517 EAL: Detected lcore 9 as core 11 on socket 0 00:03:21.517 EAL: Detected lcore 10 as core 12 on socket 0 00:03:21.517 EAL: Detected lcore 11 as core 13 on socket 0 00:03:21.517 EAL: Detected lcore 12 as core 0 on socket 1 00:03:21.517 EAL: Detected lcore 13 as core 1 on socket 1 00:03:21.517 EAL: Detected lcore 14 as core 2 on socket 1 00:03:21.517 EAL: Detected lcore 15 as core 3 on socket 1 00:03:21.517 EAL: Detected lcore 16 as core 4 on socket 1 00:03:21.517 EAL: Detected lcore 17 as core 5 on socket 1 00:03:21.517 EAL: Detected lcore 18 as core 8 on socket 1 00:03:21.517 EAL: Detected lcore 19 as core 9 on socket 1 00:03:21.517 EAL: Detected lcore 20 as core 10 on socket 1 00:03:21.517 EAL: Detected lcore 21 as core 11 on socket 1 00:03:21.517 EAL: Detected lcore 22 as core 12 on socket 1 00:03:21.517 EAL: Detected lcore 23 as core 13 on socket 1 00:03:21.517 EAL: Detected lcore 24 as core 0 on socket 0 00:03:21.517 EAL: Detected lcore 25 as core 1 on socket 0 00:03:21.517 EAL: Detected lcore 26 as core 2 on socket 0 00:03:21.517 EAL: Detected lcore 27 as core 3 on socket 0 00:03:21.517 EAL: Detected lcore 28 as core 4 on socket 0 00:03:21.517 EAL: Detected lcore 29 as core 5 on socket 0 00:03:21.517 EAL: Detected lcore 30 as core 8 on socket 0 00:03:21.517 EAL: Detected lcore 31 as core 9 on socket 0 00:03:21.517 EAL: Detected lcore 32 as core 10 on socket 0 00:03:21.517 EAL: Detected lcore 33 as core 11 on socket 0 00:03:21.517 EAL: Detected lcore 34 as core 12 on socket 0 00:03:21.517 EAL: Detected lcore 35 as core 13 on socket 0 00:03:21.517 EAL: Detected lcore 36 as core 0 on socket 1 00:03:21.517 EAL: Detected lcore 37 as core 1 on socket 1 00:03:21.517 EAL: Detected lcore 38 as core 2 on socket 1 00:03:21.517 EAL: Detected lcore 39 as core 3 on socket 1 00:03:21.517 EAL: Detected lcore 40 as core 4 on socket 1 00:03:21.517 EAL: Detected lcore 41 as core 5 on socket 1 00:03:21.517 EAL: Detected lcore 42 as core 8 on socket 1 00:03:21.517 EAL: Detected lcore 43 as core 9 on socket 1 00:03:21.517 EAL: Detected lcore 44 as core 10 on socket 1 00:03:21.517 EAL: Detected lcore 45 as core 11 on socket 1 00:03:21.517 EAL: Detected lcore 46 as core 12 on socket 1 00:03:21.517 EAL: Detected lcore 47 as core 13 on 
socket 1 00:03:21.517 EAL: Maximum logical cores by configuration: 128 00:03:21.517 EAL: Detected CPU lcores: 48 00:03:21.517 EAL: Detected NUMA nodes: 2 00:03:21.517 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:21.517 EAL: Detected shared linkage of DPDK 00:03:21.517 EAL: No shared files mode enabled, IPC will be disabled 00:03:21.517 EAL: Bus pci wants IOVA as 'DC' 00:03:21.517 EAL: Buses did not request a specific IOVA mode. 00:03:21.517 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:21.517 EAL: Selected IOVA mode 'VA' 00:03:21.517 EAL: No free 2048 kB hugepages reported on node 1 00:03:21.517 EAL: Probing VFIO support... 00:03:21.517 EAL: IOMMU type 1 (Type 1) is supported 00:03:21.517 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:21.517 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:21.517 EAL: VFIO support initialized 00:03:21.517 EAL: Ask a virtual area of 0x2e000 bytes 00:03:21.517 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:21.517 EAL: Setting up physically contiguous memory... 00:03:21.517 EAL: Setting maximum number of open files to 524288 00:03:21.517 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:21.517 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:21.517 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:21.517 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.517 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:21.517 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.517 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.517 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:21.517 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:21.517 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.517 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:21.517 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.517 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.517 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:21.517 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:21.517 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.517 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:21.517 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.517 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.517 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:21.517 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:21.517 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.517 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:21.517 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:21.517 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.517 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:21.517 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:21.517 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:21.517 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.517 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:21.517 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.517 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.517 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:21.517 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:21.517 EAL: Ask a virtual area of 0x61000 bytes 
00:03:21.517 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:21.517 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.517 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.517 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:21.517 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:21.517 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.517 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:21.517 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.517 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.517 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:21.517 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:21.517 EAL: Ask a virtual area of 0x61000 bytes 00:03:21.517 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:21.517 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:21.517 EAL: Ask a virtual area of 0x400000000 bytes 00:03:21.517 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:21.517 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:21.517 EAL: Hugepages will be freed exactly as allocated. 00:03:21.517 EAL: No shared files mode enabled, IPC is disabled 00:03:21.517 EAL: No shared files mode enabled, IPC is disabled 00:03:21.517 EAL: TSC frequency is ~2700000 KHz 00:03:21.517 EAL: Main lcore 0 is ready (tid=7fb3e8593a00;cpuset=[0]) 00:03:21.517 EAL: Trying to obtain current memory policy. 00:03:21.517 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.517 EAL: Restoring previous memory policy: 0 00:03:21.517 EAL: request: mp_malloc_sync 00:03:21.517 EAL: No shared files mode enabled, IPC is disabled 00:03:21.517 EAL: Heap on socket 0 was expanded by 2MB 00:03:21.517 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:21.778 EAL: Mem event callback 'spdk:(nil)' registered 00:03:21.778 00:03:21.778 00:03:21.778 CUnit - A unit testing framework for C - Version 2.1-3 00:03:21.778 http://cunit.sourceforge.net/ 00:03:21.778 00:03:21.778 00:03:21.778 Suite: components_suite 00:03:21.778 Test: vtophys_malloc_test ...passed 00:03:21.778 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:21.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.778 EAL: Restoring previous memory policy: 4 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was expanded by 4MB 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was shrunk by 4MB 00:03:21.778 EAL: Trying to obtain current memory policy. 
00:03:21.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.778 EAL: Restoring previous memory policy: 4 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was expanded by 6MB 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was shrunk by 6MB 00:03:21.778 EAL: Trying to obtain current memory policy. 00:03:21.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.778 EAL: Restoring previous memory policy: 4 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was expanded by 10MB 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was shrunk by 10MB 00:03:21.778 EAL: Trying to obtain current memory policy. 00:03:21.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.778 EAL: Restoring previous memory policy: 4 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was expanded by 18MB 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was shrunk by 18MB 00:03:21.778 EAL: Trying to obtain current memory policy. 00:03:21.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.778 EAL: Restoring previous memory policy: 4 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was expanded by 34MB 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was shrunk by 34MB 00:03:21.778 EAL: Trying to obtain current memory policy. 00:03:21.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.778 EAL: Restoring previous memory policy: 4 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was expanded by 66MB 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was shrunk by 66MB 00:03:21.778 EAL: Trying to obtain current memory policy. 
00:03:21.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.778 EAL: Restoring previous memory policy: 4 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was expanded by 130MB 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was shrunk by 130MB 00:03:21.778 EAL: Trying to obtain current memory policy. 00:03:21.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:21.778 EAL: Restoring previous memory policy: 4 00:03:21.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:21.778 EAL: request: mp_malloc_sync 00:03:21.778 EAL: No shared files mode enabled, IPC is disabled 00:03:21.778 EAL: Heap on socket 0 was expanded by 258MB 00:03:22.037 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.037 EAL: request: mp_malloc_sync 00:03:22.037 EAL: No shared files mode enabled, IPC is disabled 00:03:22.037 EAL: Heap on socket 0 was shrunk by 258MB 00:03:22.037 EAL: Trying to obtain current memory policy. 00:03:22.037 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.037 EAL: Restoring previous memory policy: 4 00:03:22.037 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.037 EAL: request: mp_malloc_sync 00:03:22.037 EAL: No shared files mode enabled, IPC is disabled 00:03:22.037 EAL: Heap on socket 0 was expanded by 514MB 00:03:22.294 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.294 EAL: request: mp_malloc_sync 00:03:22.294 EAL: No shared files mode enabled, IPC is disabled 00:03:22.294 EAL: Heap on socket 0 was shrunk by 514MB 00:03:22.294 EAL: Trying to obtain current memory policy. 
00:03:22.294 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:22.554 EAL: Restoring previous memory policy: 4 00:03:22.554 EAL: Calling mem event callback 'spdk:(nil)' 00:03:22.554 EAL: request: mp_malloc_sync 00:03:22.554 EAL: No shared files mode enabled, IPC is disabled 00:03:22.554 EAL: Heap on socket 0 was expanded by 1026MB 00:03:22.814 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.072 EAL: request: mp_malloc_sync 00:03:23.072 EAL: No shared files mode enabled, IPC is disabled 00:03:23.072 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:23.072 passed 00:03:23.072 00:03:23.072 Run Summary: Type Total Ran Passed Failed Inactive 00:03:23.073 suites 1 1 n/a 0 0 00:03:23.073 tests 2 2 2 0 0 00:03:23.073 asserts 497 497 497 0 n/a 00:03:23.073 00:03:23.073 Elapsed time = 1.385 seconds 00:03:23.073 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.073 EAL: request: mp_malloc_sync 00:03:23.073 EAL: No shared files mode enabled, IPC is disabled 00:03:23.073 EAL: Heap on socket 0 was shrunk by 2MB 00:03:23.073 EAL: No shared files mode enabled, IPC is disabled 00:03:23.073 EAL: No shared files mode enabled, IPC is disabled 00:03:23.073 EAL: No shared files mode enabled, IPC is disabled 00:03:23.073 00:03:23.073 real 0m1.502s 00:03:23.073 user 0m0.867s 00:03:23.073 sys 0m0.602s 00:03:23.073 17:19:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:23.073 17:19:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.073 ************************************ 00:03:23.073 END TEST env_vtophys 00:03:23.073 ************************************ 00:03:23.073 17:19:49 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:23.073 17:19:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:23.073 17:19:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:23.073 17:19:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.333 ************************************ 00:03:23.333 START TEST env_pci 00:03:23.333 ************************************ 00:03:23.333 17:19:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:23.333 00:03:23.333 00:03:23.333 CUnit - A unit testing framework for C - Version 2.1-3 00:03:23.333 http://cunit.sourceforge.net/ 00:03:23.333 00:03:23.333 00:03:23.333 Suite: pci 00:03:23.333 Test: pci_hook ...[2024-04-18 17:19:49.636939] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1887914 has claimed it 00:03:23.333 EAL: Cannot find device (10000:00:01.0) 00:03:23.333 EAL: Failed to attach device on primary process 00:03:23.333 passed 00:03:23.333 00:03:23.333 Run Summary: Type Total Ran Passed Failed Inactive 00:03:23.333 suites 1 1 n/a 0 0 00:03:23.333 tests 1 1 1 0 0 00:03:23.333 asserts 25 25 25 0 n/a 00:03:23.333 00:03:23.333 Elapsed time = 0.022 seconds 00:03:23.333 00:03:23.333 real 0m0.034s 00:03:23.333 user 0m0.004s 00:03:23.333 sys 0m0.029s 00:03:23.333 17:19:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:23.333 17:19:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.333 ************************************ 00:03:23.333 END TEST env_pci 00:03:23.333 ************************************ 00:03:23.333 17:19:49 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:23.333 17:19:49 -- env/env.sh@15 -- # uname 00:03:23.333 17:19:49 -- env/env.sh@15 -- # '[' Linux = Linux ']' 
00:03:23.333 17:19:49 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:23.333 17:19:49 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:23.333 17:19:49 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:03:23.333 17:19:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:23.333 17:19:49 -- common/autotest_common.sh@10 -- # set +x 00:03:23.333 ************************************ 00:03:23.333 START TEST env_dpdk_post_init 00:03:23.333 ************************************ 00:03:23.333 17:19:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:23.333 EAL: Detected CPU lcores: 48 00:03:23.333 EAL: Detected NUMA nodes: 2 00:03:23.333 EAL: Detected shared linkage of DPDK 00:03:23.333 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:23.333 EAL: Selected IOVA mode 'VA' 00:03:23.333 EAL: No free 2048 kB hugepages reported on node 1 00:03:23.333 EAL: VFIO support initialized 00:03:23.333 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:23.594 EAL: Using IOMMU type 1 (Type 1) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:23.594 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:24.534 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:27.827 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:27.827 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:27.827 Starting DPDK initialization... 00:03:27.827 Starting SPDK post initialization... 00:03:27.827 SPDK NVMe probe 00:03:27.827 Attaching to 0000:88:00.0 00:03:27.827 Attached to 0000:88:00.0 00:03:27.827 Cleaning up... 
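Before EAL can probe the I/OAT channels and the NVMe controller as spdk_ioat/spdk_nvme devices (the "Probe PCI driver" lines above), setup.sh has to have bound them to vfio-pci, which is what the earlier "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines show. The current binding of any one BDF can be checked with a sysfs readlink, for example:

bdf=0000:88:00.0                              # any PCI address from the log
readlink -f /sys/bus/pci/devices/$bdf/driver  # ends in .../drivers/vfio-pci when bound for SPDK
# scripts/setup.sh status prints the same information for every managed device at once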
00:03:27.827 00:03:27.827 real 0m4.384s 00:03:27.827 user 0m3.242s 00:03:27.827 sys 0m0.202s 00:03:27.827 17:19:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:27.827 17:19:54 -- common/autotest_common.sh@10 -- # set +x 00:03:27.827 ************************************ 00:03:27.827 END TEST env_dpdk_post_init 00:03:27.827 ************************************ 00:03:27.827 17:19:54 -- env/env.sh@26 -- # uname 00:03:27.827 17:19:54 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:27.827 17:19:54 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:27.827 17:19:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:27.827 17:19:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:27.827 17:19:54 -- common/autotest_common.sh@10 -- # set +x 00:03:27.827 ************************************ 00:03:27.827 START TEST env_mem_callbacks 00:03:27.827 ************************************ 00:03:27.827 17:19:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:27.827 EAL: Detected CPU lcores: 48 00:03:27.827 EAL: Detected NUMA nodes: 2 00:03:27.827 EAL: Detected shared linkage of DPDK 00:03:27.827 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:27.827 EAL: Selected IOVA mode 'VA' 00:03:27.827 EAL: No free 2048 kB hugepages reported on node 1 00:03:27.827 EAL: VFIO support initialized 00:03:27.827 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:27.827 00:03:27.828 00:03:27.828 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.828 http://cunit.sourceforge.net/ 00:03:27.828 00:03:27.828 00:03:27.828 Suite: memory 00:03:27.828 Test: test ... 
00:03:27.828 register 0x200000200000 2097152 00:03:27.828 malloc 3145728 00:03:27.828 register 0x200000400000 4194304 00:03:27.828 buf 0x200000500000 len 3145728 PASSED 00:03:27.828 malloc 64 00:03:27.828 buf 0x2000004fff40 len 64 PASSED 00:03:27.828 malloc 4194304 00:03:27.828 register 0x200000800000 6291456 00:03:27.828 buf 0x200000a00000 len 4194304 PASSED 00:03:27.828 free 0x200000500000 3145728 00:03:27.828 free 0x2000004fff40 64 00:03:27.828 unregister 0x200000400000 4194304 PASSED 00:03:27.828 free 0x200000a00000 4194304 00:03:27.828 unregister 0x200000800000 6291456 PASSED 00:03:27.828 malloc 8388608 00:03:27.828 register 0x200000400000 10485760 00:03:27.828 buf 0x200000600000 len 8388608 PASSED 00:03:27.828 free 0x200000600000 8388608 00:03:27.828 unregister 0x200000400000 10485760 PASSED 00:03:27.828 passed 00:03:27.828 00:03:27.828 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.828 suites 1 1 n/a 0 0 00:03:27.828 tests 1 1 1 0 0 00:03:27.828 asserts 15 15 15 0 n/a 00:03:27.828 00:03:27.828 Elapsed time = 0.005 seconds 00:03:27.828 00:03:27.828 real 0m0.049s 00:03:27.828 user 0m0.017s 00:03:27.828 sys 0m0.032s 00:03:27.828 17:19:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:27.828 17:19:54 -- common/autotest_common.sh@10 -- # set +x 00:03:27.828 ************************************ 00:03:27.828 END TEST env_mem_callbacks 00:03:27.828 ************************************ 00:03:27.828 00:03:27.828 real 0m6.779s 00:03:27.828 user 0m4.495s 00:03:27.828 sys 0m1.265s 00:03:27.828 17:19:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:27.828 17:19:54 -- common/autotest_common.sh@10 -- # set +x 00:03:27.828 ************************************ 00:03:27.828 END TEST env 00:03:27.828 ************************************ 00:03:28.086 17:19:54 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:28.086 17:19:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.086 17:19:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.086 17:19:54 -- common/autotest_common.sh@10 -- # set +x 00:03:28.086 ************************************ 00:03:28.086 START TEST rpc 00:03:28.086 ************************************ 00:03:28.086 17:19:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:28.086 * Looking for test storage... 00:03:28.086 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:28.086 17:19:54 -- rpc/rpc.sh@65 -- # spdk_pid=1888626 00:03:28.086 17:19:54 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:28.086 17:19:54 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:28.086 17:19:54 -- rpc/rpc.sh@67 -- # waitforlisten 1888626 00:03:28.086 17:19:54 -- common/autotest_common.sh@817 -- # '[' -z 1888626 ']' 00:03:28.086 17:19:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:28.086 17:19:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:28.086 17:19:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:28.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
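The waitforlisten call above blocks until spdk_tgt answers on /var/tmp/spdk.sock; a simplified stand-in that polls the RPC socket with rpc.py (retry count and sleep interval are arbitrary) looks like:

./build/bin/spdk_tgt -e bdev &   # start the target with the bdev tracepoint group, as in rpc.sh
pid=$!
for _ in $(seq 1 100); do
  # rpc_get_methods only succeeds once the RPC server is listening on /var/tmp/spdk.sock
  if ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; then
    echo "spdk_tgt (pid $pid) is ready"
    break
  fi
  sleep 0.1
done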
00:03:28.086 17:19:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:28.086 17:19:54 -- common/autotest_common.sh@10 -- # set +x 00:03:28.086 [2024-04-18 17:19:54.580909] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:03:28.086 [2024-04-18 17:19:54.581005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888626 ] 00:03:28.086 EAL: No free 2048 kB hugepages reported on node 1 00:03:28.346 [2024-04-18 17:19:54.639192] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:28.346 [2024-04-18 17:19:54.743599] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:28.346 [2024-04-18 17:19:54.743663] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1888626' to capture a snapshot of events at runtime. 00:03:28.346 [2024-04-18 17:19:54.743677] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:28.346 [2024-04-18 17:19:54.743689] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:28.346 [2024-04-18 17:19:54.743700] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1888626 for offline analysis/debug. 00:03:28.346 [2024-04-18 17:19:54.743728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:28.605 17:19:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:28.605 17:19:55 -- common/autotest_common.sh@850 -- # return 0 00:03:28.605 17:19:55 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:28.605 17:19:55 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:28.605 17:19:55 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:28.605 17:19:55 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:28.605 17:19:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.605 17:19:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.605 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:28.605 ************************************ 00:03:28.605 START TEST rpc_integrity 00:03:28.605 ************************************ 00:03:28.605 17:19:55 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:28.605 17:19:55 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:28.605 17:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:28.605 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:28.605 17:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:28.605 17:19:55 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:28.605 17:19:55 -- rpc/rpc.sh@13 -- # jq length 00:03:28.863 17:19:55 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:28.864 17:19:55 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:28.864 17:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:28.864 17:19:55 -- 
common/autotest_common.sh@10 -- # set +x 00:03:28.864 17:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:28.864 17:19:55 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:28.864 17:19:55 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:28.864 17:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:28.864 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:28.864 17:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:28.864 17:19:55 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:28.864 { 00:03:28.864 "name": "Malloc0", 00:03:28.864 "aliases": [ 00:03:28.864 "8f0ebeea-9fe3-4d06-bbee-da510b94a118" 00:03:28.864 ], 00:03:28.864 "product_name": "Malloc disk", 00:03:28.864 "block_size": 512, 00:03:28.864 "num_blocks": 16384, 00:03:28.864 "uuid": "8f0ebeea-9fe3-4d06-bbee-da510b94a118", 00:03:28.864 "assigned_rate_limits": { 00:03:28.864 "rw_ios_per_sec": 0, 00:03:28.864 "rw_mbytes_per_sec": 0, 00:03:28.864 "r_mbytes_per_sec": 0, 00:03:28.864 "w_mbytes_per_sec": 0 00:03:28.864 }, 00:03:28.864 "claimed": false, 00:03:28.864 "zoned": false, 00:03:28.864 "supported_io_types": { 00:03:28.864 "read": true, 00:03:28.864 "write": true, 00:03:28.864 "unmap": true, 00:03:28.864 "write_zeroes": true, 00:03:28.864 "flush": true, 00:03:28.864 "reset": true, 00:03:28.864 "compare": false, 00:03:28.864 "compare_and_write": false, 00:03:28.864 "abort": true, 00:03:28.864 "nvme_admin": false, 00:03:28.864 "nvme_io": false 00:03:28.864 }, 00:03:28.864 "memory_domains": [ 00:03:28.864 { 00:03:28.864 "dma_device_id": "system", 00:03:28.864 "dma_device_type": 1 00:03:28.864 }, 00:03:28.864 { 00:03:28.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.864 "dma_device_type": 2 00:03:28.864 } 00:03:28.864 ], 00:03:28.864 "driver_specific": {} 00:03:28.864 } 00:03:28.864 ]' 00:03:28.864 17:19:55 -- rpc/rpc.sh@17 -- # jq length 00:03:28.864 17:19:55 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:28.864 17:19:55 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:28.864 17:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:28.864 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:28.864 [2024-04-18 17:19:55.214619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:28.864 [2024-04-18 17:19:55.214661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:28.864 [2024-04-18 17:19:55.214712] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb1f820 00:03:28.864 [2024-04-18 17:19:55.214742] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:28.864 [2024-04-18 17:19:55.216320] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:28.864 [2024-04-18 17:19:55.216348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:28.864 Passthru0 00:03:28.864 17:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:28.864 17:19:55 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:28.864 17:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:28.864 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:28.864 17:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:28.864 17:19:55 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:28.864 { 00:03:28.864 "name": "Malloc0", 00:03:28.864 "aliases": [ 00:03:28.864 "8f0ebeea-9fe3-4d06-bbee-da510b94a118" 00:03:28.864 ], 00:03:28.864 "product_name": "Malloc disk", 00:03:28.864 "block_size": 512, 00:03:28.864 "num_blocks": 
16384, 00:03:28.864 "uuid": "8f0ebeea-9fe3-4d06-bbee-da510b94a118", 00:03:28.864 "assigned_rate_limits": { 00:03:28.864 "rw_ios_per_sec": 0, 00:03:28.864 "rw_mbytes_per_sec": 0, 00:03:28.864 "r_mbytes_per_sec": 0, 00:03:28.864 "w_mbytes_per_sec": 0 00:03:28.864 }, 00:03:28.864 "claimed": true, 00:03:28.864 "claim_type": "exclusive_write", 00:03:28.864 "zoned": false, 00:03:28.864 "supported_io_types": { 00:03:28.864 "read": true, 00:03:28.864 "write": true, 00:03:28.864 "unmap": true, 00:03:28.864 "write_zeroes": true, 00:03:28.864 "flush": true, 00:03:28.864 "reset": true, 00:03:28.864 "compare": false, 00:03:28.864 "compare_and_write": false, 00:03:28.864 "abort": true, 00:03:28.864 "nvme_admin": false, 00:03:28.864 "nvme_io": false 00:03:28.864 }, 00:03:28.864 "memory_domains": [ 00:03:28.864 { 00:03:28.864 "dma_device_id": "system", 00:03:28.864 "dma_device_type": 1 00:03:28.864 }, 00:03:28.864 { 00:03:28.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.864 "dma_device_type": 2 00:03:28.864 } 00:03:28.864 ], 00:03:28.864 "driver_specific": {} 00:03:28.864 }, 00:03:28.864 { 00:03:28.864 "name": "Passthru0", 00:03:28.864 "aliases": [ 00:03:28.864 "bd505e9b-62e0-5577-954d-5315a4a5bc25" 00:03:28.864 ], 00:03:28.864 "product_name": "passthru", 00:03:28.864 "block_size": 512, 00:03:28.864 "num_blocks": 16384, 00:03:28.864 "uuid": "bd505e9b-62e0-5577-954d-5315a4a5bc25", 00:03:28.864 "assigned_rate_limits": { 00:03:28.864 "rw_ios_per_sec": 0, 00:03:28.864 "rw_mbytes_per_sec": 0, 00:03:28.864 "r_mbytes_per_sec": 0, 00:03:28.864 "w_mbytes_per_sec": 0 00:03:28.864 }, 00:03:28.864 "claimed": false, 00:03:28.864 "zoned": false, 00:03:28.864 "supported_io_types": { 00:03:28.864 "read": true, 00:03:28.864 "write": true, 00:03:28.864 "unmap": true, 00:03:28.864 "write_zeroes": true, 00:03:28.864 "flush": true, 00:03:28.864 "reset": true, 00:03:28.864 "compare": false, 00:03:28.864 "compare_and_write": false, 00:03:28.864 "abort": true, 00:03:28.864 "nvme_admin": false, 00:03:28.864 "nvme_io": false 00:03:28.864 }, 00:03:28.864 "memory_domains": [ 00:03:28.864 { 00:03:28.864 "dma_device_id": "system", 00:03:28.864 "dma_device_type": 1 00:03:28.864 }, 00:03:28.864 { 00:03:28.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:28.864 "dma_device_type": 2 00:03:28.864 } 00:03:28.864 ], 00:03:28.864 "driver_specific": { 00:03:28.864 "passthru": { 00:03:28.864 "name": "Passthru0", 00:03:28.864 "base_bdev_name": "Malloc0" 00:03:28.864 } 00:03:28.864 } 00:03:28.864 } 00:03:28.864 ]' 00:03:28.864 17:19:55 -- rpc/rpc.sh@21 -- # jq length 00:03:28.864 17:19:55 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:28.864 17:19:55 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:28.864 17:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:28.864 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:28.864 17:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:28.864 17:19:55 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:28.864 17:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:28.864 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:28.864 17:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:28.864 17:19:55 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:28.864 17:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:28.864 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:28.864 17:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:28.864 17:19:55 -- rpc/rpc.sh@25 -- # bdevs='[]' 
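For reference, the rpc_integrity sequence whose output appears above can be reproduced directly with rpc.py; the commands below are the same RPCs the test issues through rpc_cmd (a sketch, reusing the $SPDK path from the earlier snippet):

  RPC="$SPDK/scripts/rpc.py"
  $RPC bdev_malloc_create 8 512                        # 8 MiB malloc bdev, 512-byte blocks -> Malloc0
  $RPC bdev_passthru_create -b Malloc0 -p Passthru0    # Passthru0 claims Malloc0 (claim_type exclusive_write)
  $RPC bdev_get_bdevs | jq length                      # 2: the base bdev plus the passthru on top
  $RPC bdev_passthru_delete Passthru0
  $RPC bdev_malloc_delete Malloc0
  $RPC bdev_get_bdevs | jq length                      # 0 again, matching the empty list above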
00:03:28.864 17:19:55 -- rpc/rpc.sh@26 -- # jq length 00:03:28.864 17:19:55 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:28.864 00:03:28.864 real 0m0.233s 00:03:28.864 user 0m0.155s 00:03:28.864 sys 0m0.021s 00:03:28.864 17:19:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:28.864 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:28.864 ************************************ 00:03:28.864 END TEST rpc_integrity 00:03:28.864 ************************************ 00:03:28.865 17:19:55 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:28.865 17:19:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:28.865 17:19:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:28.865 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:29.123 ************************************ 00:03:29.123 START TEST rpc_plugins 00:03:29.123 ************************************ 00:03:29.123 17:19:55 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:03:29.123 17:19:55 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:29.123 17:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.123 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:29.123 17:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.123 17:19:55 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:29.123 17:19:55 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:29.123 17:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.123 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:29.123 17:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.123 17:19:55 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:29.123 { 00:03:29.123 "name": "Malloc1", 00:03:29.123 "aliases": [ 00:03:29.123 "bc810c96-0ed1-47af-a5f6-fea6d5c0837f" 00:03:29.123 ], 00:03:29.123 "product_name": "Malloc disk", 00:03:29.123 "block_size": 4096, 00:03:29.123 "num_blocks": 256, 00:03:29.123 "uuid": "bc810c96-0ed1-47af-a5f6-fea6d5c0837f", 00:03:29.123 "assigned_rate_limits": { 00:03:29.123 "rw_ios_per_sec": 0, 00:03:29.123 "rw_mbytes_per_sec": 0, 00:03:29.123 "r_mbytes_per_sec": 0, 00:03:29.123 "w_mbytes_per_sec": 0 00:03:29.123 }, 00:03:29.123 "claimed": false, 00:03:29.123 "zoned": false, 00:03:29.123 "supported_io_types": { 00:03:29.123 "read": true, 00:03:29.123 "write": true, 00:03:29.123 "unmap": true, 00:03:29.123 "write_zeroes": true, 00:03:29.123 "flush": true, 00:03:29.123 "reset": true, 00:03:29.123 "compare": false, 00:03:29.123 "compare_and_write": false, 00:03:29.123 "abort": true, 00:03:29.123 "nvme_admin": false, 00:03:29.123 "nvme_io": false 00:03:29.123 }, 00:03:29.123 "memory_domains": [ 00:03:29.123 { 00:03:29.123 "dma_device_id": "system", 00:03:29.123 "dma_device_type": 1 00:03:29.123 }, 00:03:29.123 { 00:03:29.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.123 "dma_device_type": 2 00:03:29.123 } 00:03:29.123 ], 00:03:29.123 "driver_specific": {} 00:03:29.123 } 00:03:29.123 ]' 00:03:29.123 17:19:55 -- rpc/rpc.sh@32 -- # jq length 00:03:29.123 17:19:55 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:29.123 17:19:55 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:29.123 17:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.123 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:29.123 17:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.123 17:19:55 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:29.123 17:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 
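The create_malloc and delete_malloc calls in this rpc_plugins run are not built-in RPCs: rpc.py's --plugin option imports the named Python module (here the test's rpc_plugin, found via the PYTHONPATH exported earlier) and lets it add extra subcommands. A small sketch of the same invocation outside the harness, with the plugin's behaviour inferred from the output above:

  export PYTHONPATH="$PYTHONPATH:$SPDK/test/rpc_plugins"
  $SPDK/scripts/rpc.py --plugin rpc_plugin create_malloc          # plugin-provided RPC; created Malloc1 above
  $SPDK/scripts/rpc.py --plugin rpc_plugin delete_malloc Malloc1  # and its matching teardown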
00:03:29.123 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:29.123 17:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.123 17:19:55 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:29.123 17:19:55 -- rpc/rpc.sh@36 -- # jq length 00:03:29.123 17:19:55 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:29.123 00:03:29.123 real 0m0.114s 00:03:29.123 user 0m0.074s 00:03:29.123 sys 0m0.009s 00:03:29.123 17:19:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:29.123 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:29.123 ************************************ 00:03:29.123 END TEST rpc_plugins 00:03:29.123 ************************************ 00:03:29.123 17:19:55 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:29.123 17:19:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:29.123 17:19:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:29.123 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:29.381 ************************************ 00:03:29.381 START TEST rpc_trace_cmd_test 00:03:29.381 ************************************ 00:03:29.381 17:19:55 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:03:29.381 17:19:55 -- rpc/rpc.sh@40 -- # local info 00:03:29.381 17:19:55 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:29.381 17:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.381 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:29.381 17:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.381 17:19:55 -- rpc/rpc.sh@42 -- # info='{ 00:03:29.381 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1888626", 00:03:29.381 "tpoint_group_mask": "0x8", 00:03:29.381 "iscsi_conn": { 00:03:29.381 "mask": "0x2", 00:03:29.381 "tpoint_mask": "0x0" 00:03:29.381 }, 00:03:29.381 "scsi": { 00:03:29.381 "mask": "0x4", 00:03:29.381 "tpoint_mask": "0x0" 00:03:29.381 }, 00:03:29.381 "bdev": { 00:03:29.381 "mask": "0x8", 00:03:29.381 "tpoint_mask": "0xffffffffffffffff" 00:03:29.381 }, 00:03:29.381 "nvmf_rdma": { 00:03:29.381 "mask": "0x10", 00:03:29.381 "tpoint_mask": "0x0" 00:03:29.381 }, 00:03:29.381 "nvmf_tcp": { 00:03:29.381 "mask": "0x20", 00:03:29.381 "tpoint_mask": "0x0" 00:03:29.381 }, 00:03:29.381 "ftl": { 00:03:29.381 "mask": "0x40", 00:03:29.381 "tpoint_mask": "0x0" 00:03:29.381 }, 00:03:29.381 "blobfs": { 00:03:29.381 "mask": "0x80", 00:03:29.381 "tpoint_mask": "0x0" 00:03:29.381 }, 00:03:29.381 "dsa": { 00:03:29.381 "mask": "0x200", 00:03:29.381 "tpoint_mask": "0x0" 00:03:29.381 }, 00:03:29.381 "thread": { 00:03:29.381 "mask": "0x400", 00:03:29.381 "tpoint_mask": "0x0" 00:03:29.381 }, 00:03:29.381 "nvme_pcie": { 00:03:29.381 "mask": "0x800", 00:03:29.381 "tpoint_mask": "0x0" 00:03:29.381 }, 00:03:29.381 "iaa": { 00:03:29.381 "mask": "0x1000", 00:03:29.381 "tpoint_mask": "0x0" 00:03:29.381 }, 00:03:29.381 "nvme_tcp": { 00:03:29.381 "mask": "0x2000", 00:03:29.381 "tpoint_mask": "0x0" 00:03:29.381 }, 00:03:29.381 "bdev_nvme": { 00:03:29.381 "mask": "0x4000", 00:03:29.381 "tpoint_mask": "0x0" 00:03:29.381 }, 00:03:29.381 "sock": { 00:03:29.381 "mask": "0x8000", 00:03:29.381 "tpoint_mask": "0x0" 00:03:29.381 } 00:03:29.381 }' 00:03:29.381 17:19:55 -- rpc/rpc.sh@43 -- # jq length 00:03:29.381 17:19:55 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:29.381 17:19:55 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:29.381 17:19:55 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:29.381 17:19:55 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:29.381 17:19:55 -- 
rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:29.381 17:19:55 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:29.381 17:19:55 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:29.381 17:19:55 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:29.381 17:19:55 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:29.381 00:03:29.381 real 0m0.189s 00:03:29.381 user 0m0.173s 00:03:29.381 sys 0m0.008s 00:03:29.381 17:19:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:29.381 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:29.381 ************************************ 00:03:29.381 END TEST rpc_trace_cmd_test 00:03:29.381 ************************************ 00:03:29.381 17:19:55 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:29.381 17:19:55 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:29.381 17:19:55 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:29.381 17:19:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:29.381 17:19:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:29.381 17:19:55 -- common/autotest_common.sh@10 -- # set +x 00:03:29.641 ************************************ 00:03:29.641 START TEST rpc_daemon_integrity 00:03:29.641 ************************************ 00:03:29.641 17:19:55 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:29.641 17:19:56 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:29.641 17:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.641 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:03:29.641 17:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.641 17:19:56 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:29.641 17:19:56 -- rpc/rpc.sh@13 -- # jq length 00:03:29.641 17:19:56 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:29.641 17:19:56 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:29.641 17:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.641 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:03:29.641 17:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.641 17:19:56 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:29.642 17:19:56 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:29.642 17:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.642 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:03:29.642 17:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.642 17:19:56 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:29.642 { 00:03:29.642 "name": "Malloc2", 00:03:29.642 "aliases": [ 00:03:29.642 "34f7a0f7-a85f-44af-a661-0b44e2c02a0d" 00:03:29.642 ], 00:03:29.642 "product_name": "Malloc disk", 00:03:29.642 "block_size": 512, 00:03:29.642 "num_blocks": 16384, 00:03:29.642 "uuid": "34f7a0f7-a85f-44af-a661-0b44e2c02a0d", 00:03:29.642 "assigned_rate_limits": { 00:03:29.642 "rw_ios_per_sec": 0, 00:03:29.642 "rw_mbytes_per_sec": 0, 00:03:29.642 "r_mbytes_per_sec": 0, 00:03:29.642 "w_mbytes_per_sec": 0 00:03:29.642 }, 00:03:29.642 "claimed": false, 00:03:29.642 "zoned": false, 00:03:29.642 "supported_io_types": { 00:03:29.642 "read": true, 00:03:29.642 "write": true, 00:03:29.642 "unmap": true, 00:03:29.642 "write_zeroes": true, 00:03:29.642 "flush": true, 00:03:29.642 "reset": true, 00:03:29.642 "compare": false, 00:03:29.642 "compare_and_write": false, 00:03:29.642 "abort": true, 00:03:29.642 "nvme_admin": false, 00:03:29.642 "nvme_io": false 00:03:29.642 }, 00:03:29.642 "memory_domains": [ 00:03:29.642 { 00:03:29.642 "dma_device_id": "system", 00:03:29.642 "dma_device_type": 1 
00:03:29.642 }, 00:03:29.642 { 00:03:29.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.642 "dma_device_type": 2 00:03:29.642 } 00:03:29.642 ], 00:03:29.642 "driver_specific": {} 00:03:29.642 } 00:03:29.642 ]' 00:03:29.642 17:19:56 -- rpc/rpc.sh@17 -- # jq length 00:03:29.642 17:19:56 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:29.642 17:19:56 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:29.642 17:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.642 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:03:29.642 [2024-04-18 17:19:56.105181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:29.642 [2024-04-18 17:19:56.105225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:29.642 [2024-04-18 17:19:56.105251] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb17100 00:03:29.642 [2024-04-18 17:19:56.105268] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:29.642 [2024-04-18 17:19:56.106702] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:29.642 [2024-04-18 17:19:56.106726] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:29.642 Passthru0 00:03:29.642 17:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.642 17:19:56 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:29.642 17:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.642 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:03:29.642 17:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.642 17:19:56 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:29.642 { 00:03:29.642 "name": "Malloc2", 00:03:29.642 "aliases": [ 00:03:29.642 "34f7a0f7-a85f-44af-a661-0b44e2c02a0d" 00:03:29.642 ], 00:03:29.642 "product_name": "Malloc disk", 00:03:29.642 "block_size": 512, 00:03:29.642 "num_blocks": 16384, 00:03:29.642 "uuid": "34f7a0f7-a85f-44af-a661-0b44e2c02a0d", 00:03:29.642 "assigned_rate_limits": { 00:03:29.642 "rw_ios_per_sec": 0, 00:03:29.642 "rw_mbytes_per_sec": 0, 00:03:29.642 "r_mbytes_per_sec": 0, 00:03:29.642 "w_mbytes_per_sec": 0 00:03:29.642 }, 00:03:29.642 "claimed": true, 00:03:29.642 "claim_type": "exclusive_write", 00:03:29.642 "zoned": false, 00:03:29.642 "supported_io_types": { 00:03:29.642 "read": true, 00:03:29.642 "write": true, 00:03:29.642 "unmap": true, 00:03:29.642 "write_zeroes": true, 00:03:29.642 "flush": true, 00:03:29.642 "reset": true, 00:03:29.642 "compare": false, 00:03:29.642 "compare_and_write": false, 00:03:29.642 "abort": true, 00:03:29.642 "nvme_admin": false, 00:03:29.642 "nvme_io": false 00:03:29.642 }, 00:03:29.642 "memory_domains": [ 00:03:29.642 { 00:03:29.642 "dma_device_id": "system", 00:03:29.642 "dma_device_type": 1 00:03:29.642 }, 00:03:29.642 { 00:03:29.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.642 "dma_device_type": 2 00:03:29.642 } 00:03:29.642 ], 00:03:29.642 "driver_specific": {} 00:03:29.642 }, 00:03:29.642 { 00:03:29.642 "name": "Passthru0", 00:03:29.642 "aliases": [ 00:03:29.642 "1966386d-f149-5061-ac64-0036722c0f12" 00:03:29.642 ], 00:03:29.642 "product_name": "passthru", 00:03:29.642 "block_size": 512, 00:03:29.642 "num_blocks": 16384, 00:03:29.642 "uuid": "1966386d-f149-5061-ac64-0036722c0f12", 00:03:29.642 "assigned_rate_limits": { 00:03:29.642 "rw_ios_per_sec": 0, 00:03:29.642 "rw_mbytes_per_sec": 0, 00:03:29.642 "r_mbytes_per_sec": 0, 00:03:29.642 "w_mbytes_per_sec": 0 
00:03:29.642 }, 00:03:29.642 "claimed": false, 00:03:29.642 "zoned": false, 00:03:29.642 "supported_io_types": { 00:03:29.642 "read": true, 00:03:29.642 "write": true, 00:03:29.642 "unmap": true, 00:03:29.642 "write_zeroes": true, 00:03:29.642 "flush": true, 00:03:29.642 "reset": true, 00:03:29.642 "compare": false, 00:03:29.642 "compare_and_write": false, 00:03:29.642 "abort": true, 00:03:29.642 "nvme_admin": false, 00:03:29.642 "nvme_io": false 00:03:29.642 }, 00:03:29.642 "memory_domains": [ 00:03:29.642 { 00:03:29.642 "dma_device_id": "system", 00:03:29.642 "dma_device_type": 1 00:03:29.642 }, 00:03:29.642 { 00:03:29.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:29.642 "dma_device_type": 2 00:03:29.642 } 00:03:29.642 ], 00:03:29.642 "driver_specific": { 00:03:29.642 "passthru": { 00:03:29.642 "name": "Passthru0", 00:03:29.642 "base_bdev_name": "Malloc2" 00:03:29.642 } 00:03:29.642 } 00:03:29.642 } 00:03:29.642 ]' 00:03:29.642 17:19:56 -- rpc/rpc.sh@21 -- # jq length 00:03:29.642 17:19:56 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:29.642 17:19:56 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:29.642 17:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.642 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:03:29.642 17:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.642 17:19:56 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:29.642 17:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.642 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:03:29.642 17:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.642 17:19:56 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:29.642 17:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:29.642 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:03:29.903 17:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:29.903 17:19:56 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:29.903 17:19:56 -- rpc/rpc.sh@26 -- # jq length 00:03:29.903 17:19:56 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:29.903 00:03:29.903 real 0m0.222s 00:03:29.903 user 0m0.149s 00:03:29.903 sys 0m0.020s 00:03:29.903 17:19:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:29.903 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:03:29.903 ************************************ 00:03:29.903 END TEST rpc_daemon_integrity 00:03:29.903 ************************************ 00:03:29.903 17:19:56 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:29.903 17:19:56 -- rpc/rpc.sh@84 -- # killprocess 1888626 00:03:29.903 17:19:56 -- common/autotest_common.sh@936 -- # '[' -z 1888626 ']' 00:03:29.903 17:19:56 -- common/autotest_common.sh@940 -- # kill -0 1888626 00:03:29.903 17:19:56 -- common/autotest_common.sh@941 -- # uname 00:03:29.903 17:19:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:29.903 17:19:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1888626 00:03:29.903 17:19:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:29.903 17:19:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:29.903 17:19:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1888626' 00:03:29.903 killing process with pid 1888626 00:03:29.903 17:19:56 -- common/autotest_common.sh@955 -- # kill 1888626 00:03:29.903 17:19:56 -- common/autotest_common.sh@960 -- # wait 1888626 00:03:30.471 00:03:30.471 real 0m2.249s 00:03:30.471 user 0m2.846s 00:03:30.471 sys 0m0.705s 
00:03:30.471 17:19:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:30.471 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:03:30.471 ************************************ 00:03:30.471 END TEST rpc 00:03:30.471 ************************************ 00:03:30.471 17:19:56 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:30.471 17:19:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.471 17:19:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.471 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:03:30.471 ************************************ 00:03:30.471 START TEST skip_rpc 00:03:30.471 ************************************ 00:03:30.471 17:19:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:30.471 * Looking for test storage... 00:03:30.471 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:30.471 17:19:56 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:30.471 17:19:56 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:30.471 17:19:56 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:30.471 17:19:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.471 17:19:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.471 17:19:56 -- common/autotest_common.sh@10 -- # set +x 00:03:30.471 ************************************ 00:03:30.471 START TEST skip_rpc 00:03:30.471 ************************************ 00:03:30.471 17:19:57 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:03:30.471 17:19:57 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1889110 00:03:30.471 17:19:57 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:30.471 17:19:57 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:30.471 17:19:57 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:30.731 [2024-04-18 17:19:57.050310] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:03:30.731 [2024-04-18 17:19:57.050404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889110 ] 00:03:30.731 EAL: No free 2048 kB hugepages reported on node 1 00:03:30.731 [2024-04-18 17:19:57.107943] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.731 [2024-04-18 17:19:57.223098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.052 17:20:02 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:36.052 17:20:02 -- common/autotest_common.sh@638 -- # local es=0 00:03:36.052 17:20:02 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:36.052 17:20:02 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:03:36.052 17:20:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:36.052 17:20:02 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:03:36.052 17:20:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:36.052 17:20:02 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:03:36.052 17:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:36.052 17:20:02 -- common/autotest_common.sh@10 -- # set +x 00:03:36.052 17:20:02 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:36.052 17:20:02 -- common/autotest_common.sh@641 -- # es=1 00:03:36.052 17:20:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:36.052 17:20:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:03:36.052 17:20:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:36.052 17:20:02 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:36.052 17:20:02 -- rpc/skip_rpc.sh@23 -- # killprocess 1889110 00:03:36.052 17:20:02 -- common/autotest_common.sh@936 -- # '[' -z 1889110 ']' 00:03:36.052 17:20:02 -- common/autotest_common.sh@940 -- # kill -0 1889110 00:03:36.052 17:20:02 -- common/autotest_common.sh@941 -- # uname 00:03:36.052 17:20:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:36.052 17:20:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1889110 00:03:36.052 17:20:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:36.052 17:20:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:36.052 17:20:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1889110' 00:03:36.052 killing process with pid 1889110 00:03:36.052 17:20:02 -- common/autotest_common.sh@955 -- # kill 1889110 00:03:36.052 17:20:02 -- common/autotest_common.sh@960 -- # wait 1889110 00:03:36.052 00:03:36.052 real 0m5.515s 00:03:36.052 user 0m5.199s 00:03:36.052 sys 0m0.322s 00:03:36.052 17:20:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:36.052 17:20:02 -- common/autotest_common.sh@10 -- # set +x 00:03:36.052 ************************************ 00:03:36.052 END TEST skip_rpc 00:03:36.052 ************************************ 00:03:36.052 17:20:02 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:36.052 17:20:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:36.052 17:20:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:36.052 17:20:02 -- common/autotest_common.sh@10 -- # set +x 00:03:36.310 ************************************ 00:03:36.310 START TEST skip_rpc_with_json 00:03:36.310 ************************************ 
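The skip_rpc_with_json run that follows exercises a configure/save/replay round trip: bring up a target, create the NVMe-oF TCP transport over RPC, dump the live configuration with save_config, then restart the target from that JSON and check the transport comes up again. A condensed sketch of that flow, paths as before and with the polling loop again standing in for waitforlisten:

  CFG=$SPDK/test/rpc/config.json
  $SPDK/build/bin/spdk_tgt -m 0x1 & tgt=$!
  until $SPDK/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp     # runtime configuration via RPC
  $SPDK/scripts/rpc.py save_config > "$CFG"             # capture the live config as JSON
  kill $tgt; wait $tgt 2>/dev/null
  $SPDK/build/bin/spdk_tgt -m 0x1 --json "$CFG" &       # replay the saved configuration at startup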
00:03:36.310 17:20:02 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:03:36.310 17:20:02 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:36.310 17:20:02 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1889810 00:03:36.310 17:20:02 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:36.310 17:20:02 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:36.310 17:20:02 -- rpc/skip_rpc.sh@31 -- # waitforlisten 1889810 00:03:36.310 17:20:02 -- common/autotest_common.sh@817 -- # '[' -z 1889810 ']' 00:03:36.310 17:20:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:36.310 17:20:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:36.310 17:20:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:36.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:36.310 17:20:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:36.310 17:20:02 -- common/autotest_common.sh@10 -- # set +x 00:03:36.310 [2024-04-18 17:20:02.684185] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:03:36.310 [2024-04-18 17:20:02.684290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889810 ] 00:03:36.310 EAL: No free 2048 kB hugepages reported on node 1 00:03:36.310 [2024-04-18 17:20:02.747226] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.568 [2024-04-18 17:20:02.858934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.136 17:20:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:37.136 17:20:03 -- common/autotest_common.sh@850 -- # return 0 00:03:37.136 17:20:03 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:37.136 17:20:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:37.136 17:20:03 -- common/autotest_common.sh@10 -- # set +x 00:03:37.136 [2024-04-18 17:20:03.608722] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:37.136 request: 00:03:37.136 { 00:03:37.136 "trtype": "tcp", 00:03:37.136 "method": "nvmf_get_transports", 00:03:37.136 "req_id": 1 00:03:37.136 } 00:03:37.136 Got JSON-RPC error response 00:03:37.136 response: 00:03:37.136 { 00:03:37.136 "code": -19, 00:03:37.136 "message": "No such device" 00:03:37.136 } 00:03:37.136 17:20:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:37.136 17:20:03 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:37.136 17:20:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:37.136 17:20:03 -- common/autotest_common.sh@10 -- # set +x 00:03:37.136 [2024-04-18 17:20:03.616843] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:37.136 17:20:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:37.136 17:20:03 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:37.136 17:20:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:37.136 17:20:03 -- common/autotest_common.sh@10 -- # set +x 00:03:37.393 17:20:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:37.393 17:20:03 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:37.393 { 
00:03:37.393 "subsystems": [ 00:03:37.393 { 00:03:37.393 "subsystem": "keyring", 00:03:37.393 "config": [] 00:03:37.393 }, 00:03:37.393 { 00:03:37.393 "subsystem": "iobuf", 00:03:37.393 "config": [ 00:03:37.393 { 00:03:37.393 "method": "iobuf_set_options", 00:03:37.393 "params": { 00:03:37.393 "small_pool_count": 8192, 00:03:37.393 "large_pool_count": 1024, 00:03:37.393 "small_bufsize": 8192, 00:03:37.393 "large_bufsize": 135168 00:03:37.393 } 00:03:37.393 } 00:03:37.393 ] 00:03:37.393 }, 00:03:37.393 { 00:03:37.393 "subsystem": "sock", 00:03:37.393 "config": [ 00:03:37.393 { 00:03:37.393 "method": "sock_impl_set_options", 00:03:37.393 "params": { 00:03:37.393 "impl_name": "posix", 00:03:37.393 "recv_buf_size": 2097152, 00:03:37.393 "send_buf_size": 2097152, 00:03:37.393 "enable_recv_pipe": true, 00:03:37.393 "enable_quickack": false, 00:03:37.393 "enable_placement_id": 0, 00:03:37.393 "enable_zerocopy_send_server": true, 00:03:37.393 "enable_zerocopy_send_client": false, 00:03:37.393 "zerocopy_threshold": 0, 00:03:37.394 "tls_version": 0, 00:03:37.394 "enable_ktls": false 00:03:37.394 } 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "method": "sock_impl_set_options", 00:03:37.394 "params": { 00:03:37.394 "impl_name": "ssl", 00:03:37.394 "recv_buf_size": 4096, 00:03:37.394 "send_buf_size": 4096, 00:03:37.394 "enable_recv_pipe": true, 00:03:37.394 "enable_quickack": false, 00:03:37.394 "enable_placement_id": 0, 00:03:37.394 "enable_zerocopy_send_server": true, 00:03:37.394 "enable_zerocopy_send_client": false, 00:03:37.394 "zerocopy_threshold": 0, 00:03:37.394 "tls_version": 0, 00:03:37.394 "enable_ktls": false 00:03:37.394 } 00:03:37.394 } 00:03:37.394 ] 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "subsystem": "vmd", 00:03:37.394 "config": [] 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "subsystem": "accel", 00:03:37.394 "config": [ 00:03:37.394 { 00:03:37.394 "method": "accel_set_options", 00:03:37.394 "params": { 00:03:37.394 "small_cache_size": 128, 00:03:37.394 "large_cache_size": 16, 00:03:37.394 "task_count": 2048, 00:03:37.394 "sequence_count": 2048, 00:03:37.394 "buf_count": 2048 00:03:37.394 } 00:03:37.394 } 00:03:37.394 ] 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "subsystem": "bdev", 00:03:37.394 "config": [ 00:03:37.394 { 00:03:37.394 "method": "bdev_set_options", 00:03:37.394 "params": { 00:03:37.394 "bdev_io_pool_size": 65535, 00:03:37.394 "bdev_io_cache_size": 256, 00:03:37.394 "bdev_auto_examine": true, 00:03:37.394 "iobuf_small_cache_size": 128, 00:03:37.394 "iobuf_large_cache_size": 16 00:03:37.394 } 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "method": "bdev_raid_set_options", 00:03:37.394 "params": { 00:03:37.394 "process_window_size_kb": 1024 00:03:37.394 } 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "method": "bdev_iscsi_set_options", 00:03:37.394 "params": { 00:03:37.394 "timeout_sec": 30 00:03:37.394 } 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "method": "bdev_nvme_set_options", 00:03:37.394 "params": { 00:03:37.394 "action_on_timeout": "none", 00:03:37.394 "timeout_us": 0, 00:03:37.394 "timeout_admin_us": 0, 00:03:37.394 "keep_alive_timeout_ms": 10000, 00:03:37.394 "arbitration_burst": 0, 00:03:37.394 "low_priority_weight": 0, 00:03:37.394 "medium_priority_weight": 0, 00:03:37.394 "high_priority_weight": 0, 00:03:37.394 "nvme_adminq_poll_period_us": 10000, 00:03:37.394 "nvme_ioq_poll_period_us": 0, 00:03:37.394 "io_queue_requests": 0, 00:03:37.394 "delay_cmd_submit": true, 00:03:37.394 "transport_retry_count": 4, 00:03:37.394 "bdev_retry_count": 3, 00:03:37.394 
"transport_ack_timeout": 0, 00:03:37.394 "ctrlr_loss_timeout_sec": 0, 00:03:37.394 "reconnect_delay_sec": 0, 00:03:37.394 "fast_io_fail_timeout_sec": 0, 00:03:37.394 "disable_auto_failback": false, 00:03:37.394 "generate_uuids": false, 00:03:37.394 "transport_tos": 0, 00:03:37.394 "nvme_error_stat": false, 00:03:37.394 "rdma_srq_size": 0, 00:03:37.394 "io_path_stat": false, 00:03:37.394 "allow_accel_sequence": false, 00:03:37.394 "rdma_max_cq_size": 0, 00:03:37.394 "rdma_cm_event_timeout_ms": 0, 00:03:37.394 "dhchap_digests": [ 00:03:37.394 "sha256", 00:03:37.394 "sha384", 00:03:37.394 "sha512" 00:03:37.394 ], 00:03:37.394 "dhchap_dhgroups": [ 00:03:37.394 "null", 00:03:37.394 "ffdhe2048", 00:03:37.394 "ffdhe3072", 00:03:37.394 "ffdhe4096", 00:03:37.394 "ffdhe6144", 00:03:37.394 "ffdhe8192" 00:03:37.394 ] 00:03:37.394 } 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "method": "bdev_nvme_set_hotplug", 00:03:37.394 "params": { 00:03:37.394 "period_us": 100000, 00:03:37.394 "enable": false 00:03:37.394 } 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "method": "bdev_wait_for_examine" 00:03:37.394 } 00:03:37.394 ] 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "subsystem": "scsi", 00:03:37.394 "config": null 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "subsystem": "scheduler", 00:03:37.394 "config": [ 00:03:37.394 { 00:03:37.394 "method": "framework_set_scheduler", 00:03:37.394 "params": { 00:03:37.394 "name": "static" 00:03:37.394 } 00:03:37.394 } 00:03:37.394 ] 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "subsystem": "vhost_scsi", 00:03:37.394 "config": [] 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "subsystem": "vhost_blk", 00:03:37.394 "config": [] 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "subsystem": "ublk", 00:03:37.394 "config": [] 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "subsystem": "nbd", 00:03:37.394 "config": [] 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "subsystem": "nvmf", 00:03:37.394 "config": [ 00:03:37.394 { 00:03:37.394 "method": "nvmf_set_config", 00:03:37.394 "params": { 00:03:37.394 "discovery_filter": "match_any", 00:03:37.394 "admin_cmd_passthru": { 00:03:37.394 "identify_ctrlr": false 00:03:37.394 } 00:03:37.394 } 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "method": "nvmf_set_max_subsystems", 00:03:37.394 "params": { 00:03:37.394 "max_subsystems": 1024 00:03:37.394 } 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "method": "nvmf_set_crdt", 00:03:37.394 "params": { 00:03:37.394 "crdt1": 0, 00:03:37.394 "crdt2": 0, 00:03:37.394 "crdt3": 0 00:03:37.394 } 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "method": "nvmf_create_transport", 00:03:37.394 "params": { 00:03:37.394 "trtype": "TCP", 00:03:37.394 "max_queue_depth": 128, 00:03:37.394 "max_io_qpairs_per_ctrlr": 127, 00:03:37.394 "in_capsule_data_size": 4096, 00:03:37.394 "max_io_size": 131072, 00:03:37.394 "io_unit_size": 131072, 00:03:37.394 "max_aq_depth": 128, 00:03:37.394 "num_shared_buffers": 511, 00:03:37.394 "buf_cache_size": 4294967295, 00:03:37.394 "dif_insert_or_strip": false, 00:03:37.394 "zcopy": false, 00:03:37.394 "c2h_success": true, 00:03:37.394 "sock_priority": 0, 00:03:37.394 "abort_timeout_sec": 1, 00:03:37.394 "ack_timeout": 0 00:03:37.394 } 00:03:37.394 } 00:03:37.394 ] 00:03:37.394 }, 00:03:37.394 { 00:03:37.394 "subsystem": "iscsi", 00:03:37.394 "config": [ 00:03:37.394 { 00:03:37.394 "method": "iscsi_set_options", 00:03:37.394 "params": { 00:03:37.394 "node_base": "iqn.2016-06.io.spdk", 00:03:37.394 "max_sessions": 128, 00:03:37.394 "max_connections_per_session": 2, 00:03:37.394 "max_queue_depth": 64, 
00:03:37.394 "default_time2wait": 2, 00:03:37.394 "default_time2retain": 20, 00:03:37.394 "first_burst_length": 8192, 00:03:37.394 "immediate_data": true, 00:03:37.394 "allow_duplicated_isid": false, 00:03:37.394 "error_recovery_level": 0, 00:03:37.394 "nop_timeout": 60, 00:03:37.394 "nop_in_interval": 30, 00:03:37.394 "disable_chap": false, 00:03:37.394 "require_chap": false, 00:03:37.394 "mutual_chap": false, 00:03:37.394 "chap_group": 0, 00:03:37.394 "max_large_datain_per_connection": 64, 00:03:37.394 "max_r2t_per_connection": 4, 00:03:37.394 "pdu_pool_size": 36864, 00:03:37.394 "immediate_data_pool_size": 16384, 00:03:37.394 "data_out_pool_size": 2048 00:03:37.394 } 00:03:37.394 } 00:03:37.394 ] 00:03:37.394 } 00:03:37.394 ] 00:03:37.394 } 00:03:37.394 17:20:03 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:37.394 17:20:03 -- rpc/skip_rpc.sh@40 -- # killprocess 1889810 00:03:37.394 17:20:03 -- common/autotest_common.sh@936 -- # '[' -z 1889810 ']' 00:03:37.394 17:20:03 -- common/autotest_common.sh@940 -- # kill -0 1889810 00:03:37.395 17:20:03 -- common/autotest_common.sh@941 -- # uname 00:03:37.395 17:20:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:37.395 17:20:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1889810 00:03:37.395 17:20:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:37.395 17:20:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:37.395 17:20:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1889810' 00:03:37.395 killing process with pid 1889810 00:03:37.395 17:20:03 -- common/autotest_common.sh@955 -- # kill 1889810 00:03:37.395 17:20:03 -- common/autotest_common.sh@960 -- # wait 1889810 00:03:37.963 17:20:04 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1890082 00:03:37.963 17:20:04 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:37.963 17:20:04 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:43.239 17:20:09 -- rpc/skip_rpc.sh@50 -- # killprocess 1890082 00:03:43.239 17:20:09 -- common/autotest_common.sh@936 -- # '[' -z 1890082 ']' 00:03:43.239 17:20:09 -- common/autotest_common.sh@940 -- # kill -0 1890082 00:03:43.239 17:20:09 -- common/autotest_common.sh@941 -- # uname 00:03:43.239 17:20:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:43.239 17:20:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1890082 00:03:43.239 17:20:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:43.239 17:20:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:43.239 17:20:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1890082' 00:03:43.239 killing process with pid 1890082 00:03:43.239 17:20:09 -- common/autotest_common.sh@955 -- # kill 1890082 00:03:43.239 17:20:09 -- common/autotest_common.sh@960 -- # wait 1890082 00:03:43.239 17:20:09 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:43.239 17:20:09 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:43.239 00:03:43.239 real 0m7.089s 00:03:43.239 user 0m6.871s 00:03:43.239 sys 0m0.708s 00:03:43.239 17:20:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:43.239 17:20:09 -- common/autotest_common.sh@10 -- # set +x 00:03:43.239 
************************************ 00:03:43.239 END TEST skip_rpc_with_json 00:03:43.239 ************************************ 00:03:43.239 17:20:09 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:43.239 17:20:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.239 17:20:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.239 17:20:09 -- common/autotest_common.sh@10 -- # set +x 00:03:43.498 ************************************ 00:03:43.498 START TEST skip_rpc_with_delay 00:03:43.498 ************************************ 00:03:43.498 17:20:09 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:03:43.498 17:20:09 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:43.498 17:20:09 -- common/autotest_common.sh@638 -- # local es=0 00:03:43.498 17:20:09 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:43.498 17:20:09 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.498 17:20:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:43.498 17:20:09 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.498 17:20:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:43.498 17:20:09 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.498 17:20:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:43.498 17:20:09 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.498 17:20:09 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:43.498 17:20:09 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:43.498 [2024-04-18 17:20:09.899661] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
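The spdk_app_start error above, together with the core-lock cleanup message that follows it, is the expected outcome of skip_rpc_with_delay: --wait-for-rpc tells the app to hold subsystem initialization until it is told to continue over RPC, which is meaningless when --no-rpc-server is also given, so startup must fail. A stand-alone check of the same behaviour (sketch, same $SPDK path as before):

  if $SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: spdk_tgt accepted --wait-for-rpc with --no-rpc-server" >&2
      exit 1
  fi
  echo "spdk_tgt rejected the flag combination, as expected"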
00:03:43.498 [2024-04-18 17:20:09.899787] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:43.498 17:20:09 -- common/autotest_common.sh@641 -- # es=1 00:03:43.498 17:20:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:43.498 17:20:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:03:43.498 17:20:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:43.498 00:03:43.498 real 0m0.065s 00:03:43.498 user 0m0.041s 00:03:43.498 sys 0m0.024s 00:03:43.498 17:20:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:43.498 17:20:09 -- common/autotest_common.sh@10 -- # set +x 00:03:43.498 ************************************ 00:03:43.498 END TEST skip_rpc_with_delay 00:03:43.498 ************************************ 00:03:43.498 17:20:09 -- rpc/skip_rpc.sh@77 -- # uname 00:03:43.498 17:20:09 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:43.498 17:20:09 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:43.498 17:20:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.498 17:20:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.498 17:20:09 -- common/autotest_common.sh@10 -- # set +x 00:03:43.759 ************************************ 00:03:43.759 START TEST exit_on_failed_rpc_init 00:03:43.759 ************************************ 00:03:43.759 17:20:10 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:03:43.759 17:20:10 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1890807 00:03:43.759 17:20:10 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:43.759 17:20:10 -- rpc/skip_rpc.sh@63 -- # waitforlisten 1890807 00:03:43.759 17:20:10 -- common/autotest_common.sh@817 -- # '[' -z 1890807 ']' 00:03:43.759 17:20:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:43.759 17:20:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:43.759 17:20:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:43.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:43.759 17:20:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:43.759 17:20:10 -- common/autotest_common.sh@10 -- # set +x 00:03:43.759 [2024-04-18 17:20:10.091371] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:03:43.759 [2024-04-18 17:20:10.091469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890807 ] 00:03:43.759 EAL: No free 2048 kB hugepages reported on node 1 00:03:43.759 [2024-04-18 17:20:10.165510] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:44.018 [2024-04-18 17:20:10.308723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:44.278 17:20:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:44.278 17:20:10 -- common/autotest_common.sh@850 -- # return 0 00:03:44.278 17:20:10 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:44.278 17:20:10 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:44.278 17:20:10 -- common/autotest_common.sh@638 -- # local es=0 00:03:44.278 17:20:10 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:44.278 17:20:10 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:44.278 17:20:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:44.278 17:20:10 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:44.278 17:20:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:44.278 17:20:10 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:44.278 17:20:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:44.278 17:20:10 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:44.278 17:20:10 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:44.278 17:20:10 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:44.278 [2024-04-18 17:20:10.635849] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:03:44.278 [2024-04-18 17:20:10.635930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890824 ] 00:03:44.278 EAL: No free 2048 kB hugepages reported on node 1 00:03:44.278 [2024-04-18 17:20:10.691594] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:44.278 [2024-04-18 17:20:10.796995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:44.278 [2024-04-18 17:20:10.797111] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
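This bind failure is the point of exit_on_failed_rpc_init: the second spdk_tgt (core mask 0x2) tries to listen on the same default RPC socket, /var/tmp/spdk.sock, that the first instance already owns, so, as the next messages show, RPC initialization fails and the app stops with a non-zero code. Outside a negative test like this, a second target would normally be given its own socket; the lines below are an illustrative sketch using SPDK's -r/--rpc-socket option and rpc.py -s, not part of this test:

  $SPDK/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &       # second instance on a separate RPC socket
  $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version   # point the client at that socket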
00:03:44.278 [2024-04-18 17:20:10.797129] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:44.278 [2024-04-18 17:20:10.797141] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:44.538 17:20:10 -- common/autotest_common.sh@641 -- # es=234 00:03:44.538 17:20:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:44.538 17:20:10 -- common/autotest_common.sh@650 -- # es=106 00:03:44.538 17:20:10 -- common/autotest_common.sh@651 -- # case "$es" in 00:03:44.538 17:20:10 -- common/autotest_common.sh@658 -- # es=1 00:03:44.538 17:20:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:44.538 17:20:10 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:44.538 17:20:10 -- rpc/skip_rpc.sh@70 -- # killprocess 1890807 00:03:44.538 17:20:10 -- common/autotest_common.sh@936 -- # '[' -z 1890807 ']' 00:03:44.538 17:20:10 -- common/autotest_common.sh@940 -- # kill -0 1890807 00:03:44.538 17:20:10 -- common/autotest_common.sh@941 -- # uname 00:03:44.538 17:20:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:44.538 17:20:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1890807 00:03:44.538 17:20:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:44.538 17:20:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:44.538 17:20:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1890807' 00:03:44.538 killing process with pid 1890807 00:03:44.538 17:20:10 -- common/autotest_common.sh@955 -- # kill 1890807 00:03:44.538 17:20:10 -- common/autotest_common.sh@960 -- # wait 1890807 00:03:45.107 00:03:45.107 real 0m1.359s 00:03:45.107 user 0m1.596s 00:03:45.107 sys 0m0.466s 00:03:45.107 17:20:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:45.108 17:20:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.108 ************************************ 00:03:45.108 END TEST exit_on_failed_rpc_init 00:03:45.108 ************************************ 00:03:45.108 17:20:11 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:45.108 00:03:45.108 real 0m14.569s 00:03:45.108 user 0m13.931s 00:03:45.108 sys 0m1.806s 00:03:45.108 17:20:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:45.108 17:20:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.108 ************************************ 00:03:45.108 END TEST skip_rpc 00:03:45.108 ************************************ 00:03:45.108 17:20:11 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:45.108 17:20:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:45.108 17:20:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:45.108 17:20:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.108 ************************************ 00:03:45.108 START TEST rpc_client 00:03:45.108 ************************************ 00:03:45.108 17:20:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:45.108 * Looking for test storage... 
00:03:45.108 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:03:45.108 17:20:11 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:45.108 OK 00:03:45.108 17:20:11 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:45.108 00:03:45.108 real 0m0.070s 00:03:45.108 user 0m0.032s 00:03:45.108 sys 0m0.043s 00:03:45.108 17:20:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:45.108 17:20:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.108 ************************************ 00:03:45.108 END TEST rpc_client 00:03:45.108 ************************************ 00:03:45.108 17:20:11 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:03:45.108 17:20:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:45.108 17:20:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:45.108 17:20:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.367 ************************************ 00:03:45.367 START TEST json_config 00:03:45.367 ************************************ 00:03:45.367 17:20:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:03:45.367 17:20:11 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:45.367 17:20:11 -- nvmf/common.sh@7 -- # uname -s 00:03:45.367 17:20:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:45.367 17:20:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:45.367 17:20:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:45.367 17:20:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:45.367 17:20:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:45.367 17:20:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:45.367 17:20:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:45.367 17:20:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:45.367 17:20:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:45.367 17:20:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:45.367 17:20:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:45.367 17:20:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:45.367 17:20:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:45.367 17:20:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:45.367 17:20:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:45.367 17:20:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:45.367 17:20:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:45.367 17:20:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:45.367 17:20:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.367 17:20:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.367 17:20:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.367 17:20:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.367 17:20:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.367 17:20:11 -- paths/export.sh@5 -- # export PATH 00:03:45.367 17:20:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.367 17:20:11 -- nvmf/common.sh@47 -- # : 0 00:03:45.367 17:20:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:45.367 17:20:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:45.367 17:20:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:45.367 17:20:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:45.367 17:20:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:45.367 17:20:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:45.367 17:20:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:45.367 17:20:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:45.367 17:20:11 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:03:45.367 17:20:11 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:45.367 17:20:11 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:45.367 17:20:11 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:45.367 17:20:11 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:45.367 17:20:11 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:45.367 17:20:11 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:45.367 17:20:11 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:45.367 17:20:11 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:45.367 17:20:11 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:45.367 17:20:11 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:03:45.367 17:20:11 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:03:45.367 17:20:11 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:45.367 17:20:11 -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:45.367 17:20:11 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:45.367 17:20:11 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:45.367 INFO: JSON configuration test init 00:03:45.367 17:20:11 -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:45.367 17:20:11 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:45.367 17:20:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:45.367 17:20:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.367 17:20:11 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:45.367 17:20:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:45.367 17:20:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.367 17:20:11 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:45.367 17:20:11 -- json_config/common.sh@9 -- # local app=target 00:03:45.367 17:20:11 -- json_config/common.sh@10 -- # shift 00:03:45.367 17:20:11 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:45.367 17:20:11 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:45.367 17:20:11 -- json_config/common.sh@15 -- # local app_extra_params= 00:03:45.367 17:20:11 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:45.367 17:20:11 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:45.367 17:20:11 -- json_config/common.sh@22 -- # app_pid["$app"]=1891083 00:03:45.367 17:20:11 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:45.367 17:20:11 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:45.367 Waiting for target to run... 00:03:45.367 17:20:11 -- json_config/common.sh@25 -- # waitforlisten 1891083 /var/tmp/spdk_tgt.sock 00:03:45.367 17:20:11 -- common/autotest_common.sh@817 -- # '[' -z 1891083 ']' 00:03:45.367 17:20:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:45.367 17:20:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:45.367 17:20:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:45.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:45.367 17:20:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:45.367 17:20:11 -- common/autotest_common.sh@10 -- # set +x 00:03:45.367 [2024-04-18 17:20:11.846924] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:03:45.367 [2024-04-18 17:20:11.847008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891083 ] 00:03:45.367 EAL: No free 2048 kB hugepages reported on node 1 00:03:45.935 [2024-04-18 17:20:12.188000] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.935 [2024-04-18 17:20:12.275602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.503 17:20:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:46.503 17:20:12 -- common/autotest_common.sh@850 -- # return 0 00:03:46.503 17:20:12 -- json_config/common.sh@26 -- # echo '' 00:03:46.503 00:03:46.503 17:20:12 -- json_config/json_config.sh@269 -- # create_accel_config 00:03:46.503 17:20:12 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:46.503 17:20:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:46.503 17:20:12 -- common/autotest_common.sh@10 -- # set +x 00:03:46.503 17:20:12 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:46.503 17:20:12 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:46.503 17:20:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:46.503 17:20:12 -- common/autotest_common.sh@10 -- # set +x 00:03:46.503 17:20:12 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:46.503 17:20:12 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:46.503 17:20:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:49.804 17:20:15 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:49.804 17:20:15 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:49.804 17:20:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:49.804 17:20:15 -- common/autotest_common.sh@10 -- # set +x 00:03:49.804 17:20:15 -- json_config/json_config.sh@45 -- # local ret=0 00:03:49.804 17:20:15 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:49.804 17:20:15 -- json_config/json_config.sh@46 -- # local enabled_types 00:03:49.804 17:20:15 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:49.804 17:20:15 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:49.804 17:20:15 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:49.804 17:20:16 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:49.804 17:20:16 -- json_config/json_config.sh@48 -- # local get_types 00:03:49.804 17:20:16 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:49.804 17:20:16 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:49.804 17:20:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:49.804 17:20:16 -- common/autotest_common.sh@10 -- # set +x 00:03:49.804 17:20:16 -- json_config/json_config.sh@55 -- # return 0 00:03:49.804 17:20:16 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:49.804 17:20:16 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:49.804 17:20:16 -- json_config/json_config.sh@286 -- # 
[[ 0 -eq 1 ]] 00:03:49.804 17:20:16 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:49.804 17:20:16 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:49.804 17:20:16 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:49.804 17:20:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:49.804 17:20:16 -- common/autotest_common.sh@10 -- # set +x 00:03:49.804 17:20:16 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:49.804 17:20:16 -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:03:49.804 17:20:16 -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:03:49.804 17:20:16 -- json_config/json_config.sh@234 -- # nvmftestinit 00:03:49.804 17:20:16 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:03:49.804 17:20:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:03:49.804 17:20:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:03:49.804 17:20:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:03:49.804 17:20:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:03:49.804 17:20:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:03:49.804 17:20:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:03:49.804 17:20:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:03:49.804 17:20:16 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:03:49.804 17:20:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:03:49.804 17:20:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:03:49.804 17:20:16 -- common/autotest_common.sh@10 -- # set +x 00:03:51.708 17:20:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:03:51.708 17:20:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:03:51.708 17:20:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:03:51.708 17:20:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:03:51.708 17:20:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:03:51.708 17:20:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:03:51.708 17:20:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:03:51.708 17:20:18 -- nvmf/common.sh@295 -- # net_devs=() 00:03:51.708 17:20:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:03:51.708 17:20:18 -- nvmf/common.sh@296 -- # e810=() 00:03:51.708 17:20:18 -- nvmf/common.sh@296 -- # local -ga e810 00:03:51.708 17:20:18 -- nvmf/common.sh@297 -- # x722=() 00:03:51.708 17:20:18 -- nvmf/common.sh@297 -- # local -ga x722 00:03:51.708 17:20:18 -- nvmf/common.sh@298 -- # mlx=() 00:03:51.708 17:20:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:03:51.708 17:20:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:03:51.708 17:20:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:03:51.708 17:20:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:03:51.708 17:20:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:03:51.708 17:20:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:03:51.708 17:20:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:03:51.708 17:20:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:03:51.708 17:20:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:03:51.708 17:20:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:03:51.708 17:20:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:03:51.708 17:20:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:03:51.708 17:20:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:03:51.708 17:20:18 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:03:51.708 17:20:18 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:03:51.708 17:20:18 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:03:51.708 17:20:18 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:03:51.708 17:20:18 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:03:51.708 17:20:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:03:51.708 17:20:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:03:51.708 17:20:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:03:51.708 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:03:51.708 17:20:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:03:51.708 17:20:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:03:51.708 17:20:18 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:03:51.708 17:20:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:03:51.708 17:20:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:03:51.708 17:20:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:03:51.708 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:03:51.708 17:20:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:03:51.708 17:20:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:03:51.708 17:20:18 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:03:51.708 17:20:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:03:51.708 17:20:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:03:51.708 17:20:18 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:03:51.708 17:20:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:03:51.708 17:20:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:51.708 17:20:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:03:51.708 17:20:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:51.708 17:20:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:03:51.708 Found net devices under 0000:09:00.0: mlx_0_0 00:03:51.708 17:20:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:03:51.708 17:20:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:03:51.708 17:20:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:51.708 17:20:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:03:51.708 17:20:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:51.708 17:20:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:03:51.708 Found net devices under 0000:09:00.1: mlx_0_1 00:03:51.708 17:20:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:03:51.708 17:20:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:03:51.708 17:20:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:03:51.708 17:20:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:03:51.708 17:20:18 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:03:51.708 17:20:18 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:03:51.708 17:20:18 -- nvmf/common.sh@409 -- # rdma_device_init 00:03:51.708 17:20:18 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:03:51.708 17:20:18 -- nvmf/common.sh@58 -- # uname 00:03:51.708 17:20:18 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:03:51.708 17:20:18 -- nvmf/common.sh@62 -- # 
modprobe ib_cm 00:03:51.708 17:20:18 -- nvmf/common.sh@63 -- # modprobe ib_core 00:03:51.708 17:20:18 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:03:51.708 17:20:18 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:03:51.708 17:20:18 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:03:51.968 17:20:18 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:03:51.968 17:20:18 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:03:51.968 17:20:18 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:03:51.968 17:20:18 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:03:51.968 17:20:18 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:03:51.968 17:20:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:03:51.968 17:20:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:03:51.968 17:20:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:03:51.968 17:20:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:03:51.968 17:20:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:03:51.968 17:20:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:03:51.968 17:20:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:51.968 17:20:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:03:51.968 17:20:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:03:51.968 17:20:18 -- nvmf/common.sh@105 -- # continue 2 00:03:51.968 17:20:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:03:51.968 17:20:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:51.968 17:20:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:03:51.968 17:20:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:51.968 17:20:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:03:51.968 17:20:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:03:51.968 17:20:18 -- nvmf/common.sh@105 -- # continue 2 00:03:51.968 17:20:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:03:51.968 17:20:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:03:51.968 17:20:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:03:51.968 17:20:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:03:51.968 17:20:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:03:51.968 17:20:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:03:51.968 17:20:18 -- nvmf/common.sh@74 -- # ip= 00:03:51.968 17:20:18 -- nvmf/common.sh@75 -- # [[ -z '' ]] 00:03:51.968 17:20:18 -- nvmf/common.sh@76 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:03:51.968 17:20:18 -- nvmf/common.sh@77 -- # ip link set mlx_0_0 up 00:03:51.968 17:20:18 -- nvmf/common.sh@78 -- # (( count = count + 1 )) 00:03:51.968 17:20:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:03:51.968 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:03:51.968 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:03:51.968 altname enp9s0f0np0 00:03:51.968 inet 192.168.100.8/24 scope global mlx_0_0 00:03:51.968 valid_lft forever preferred_lft forever 00:03:51.968 17:20:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:03:51.968 17:20:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:03:51.968 17:20:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:03:51.968 17:20:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:03:51.968 17:20:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:03:51.968 17:20:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:03:51.968 17:20:18 -- nvmf/common.sh@74 -- # ip= 
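The interface bring-up traced here for mlx_0_0 (and repeated for mlx_0_1 just below) boils down to a few iproute2 calls; the addresses and device names are the ones this run happened to use:

  ip addr add 192.168.100.8/24 dev mlx_0_0
  ip link set mlx_0_0 up
  # read the address back the way get_ip_address does
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1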
00:03:51.968 17:20:18 -- nvmf/common.sh@75 -- # [[ -z '' ]] 00:03:51.968 17:20:18 -- nvmf/common.sh@76 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:03:51.968 17:20:18 -- nvmf/common.sh@77 -- # ip link set mlx_0_1 up 00:03:51.968 17:20:18 -- nvmf/common.sh@78 -- # (( count = count + 1 )) 00:03:51.968 17:20:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:03:51.968 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:03:51.968 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:03:51.968 altname enp9s0f1np1 00:03:51.968 inet 192.168.100.9/24 scope global mlx_0_1 00:03:51.968 valid_lft forever preferred_lft forever 00:03:51.968 17:20:18 -- nvmf/common.sh@411 -- # return 0 00:03:51.968 17:20:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:03:51.968 17:20:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:03:51.968 17:20:18 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:03:51.968 17:20:18 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:03:51.968 17:20:18 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:03:51.969 17:20:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:03:51.969 17:20:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:03:51.969 17:20:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:03:51.969 17:20:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:03:51.969 17:20:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:03:51.969 17:20:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:03:51.969 17:20:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:51.969 17:20:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:03:51.969 17:20:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:03:51.969 17:20:18 -- nvmf/common.sh@105 -- # continue 2 00:03:51.969 17:20:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:03:51.969 17:20:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:51.969 17:20:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:03:51.969 17:20:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:51.969 17:20:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:03:51.969 17:20:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:03:51.969 17:20:18 -- nvmf/common.sh@105 -- # continue 2 00:03:51.969 17:20:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:03:51.969 17:20:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:03:51.969 17:20:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:03:51.969 17:20:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:03:51.969 17:20:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:03:51.969 17:20:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:03:51.969 17:20:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:03:51.969 17:20:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:03:51.969 17:20:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:03:51.969 17:20:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:03:51.969 17:20:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:03:51.969 17:20:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:03:51.969 17:20:18 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:03:51.969 192.168.100.9' 00:03:51.969 17:20:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:03:51.969 192.168.100.9' 00:03:51.969 17:20:18 -- nvmf/common.sh@446 -- # head -n 1 00:03:51.969 17:20:18 -- nvmf/common.sh@446 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:03:51.969 17:20:18 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:03:51.969 192.168.100.9' 00:03:51.969 17:20:18 -- nvmf/common.sh@447 -- # tail -n +2 00:03:51.969 17:20:18 -- nvmf/common.sh@447 -- # head -n 1 00:03:51.969 17:20:18 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:03:51.969 17:20:18 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:03:51.969 17:20:18 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:03:51.969 17:20:18 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:03:51.969 17:20:18 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:03:51.969 17:20:18 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:03:51.969 17:20:18 -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:03:51.969 17:20:18 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:51.969 17:20:18 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:52.229 MallocForNvmf0 00:03:52.229 17:20:18 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:52.229 17:20:18 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:52.489 MallocForNvmf1 00:03:52.489 17:20:18 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:03:52.489 17:20:18 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:03:52.748 [2024-04-18 17:20:19.116009] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:03:52.748 [2024-04-18 17:20:19.172425] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17dc950/0x1909900) succeed. 00:03:52.748 [2024-04-18 17:20:19.186722] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17deb40/0x17e97c0) succeed. 
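The two target addresses used for the rest of the json_config run are peeled off RDMA_IP_LIST exactly as the head/tail pipeline above shows; condensed into a standalone sketch (the real helper iterates get_rdma_if_list rather than a hard-coded device list):

  RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$dev" | awk '{print $4}' | cut -d/ -f1
  done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 here
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 here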
00:03:52.748 17:20:19 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:52.748 17:20:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:53.008 17:20:19 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:53.008 17:20:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:53.267 17:20:19 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:53.267 17:20:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:53.526 17:20:19 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:03:53.526 17:20:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:03:53.783 [2024-04-18 17:20:20.188374] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:03:53.783 17:20:20 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:53.783 17:20:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:53.784 17:20:20 -- common/autotest_common.sh@10 -- # set +x 00:03:53.784 17:20:20 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:53.784 17:20:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:53.784 17:20:20 -- common/autotest_common.sh@10 -- # set +x 00:03:53.784 17:20:20 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:53.784 17:20:20 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:53.784 17:20:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:54.041 MallocBdevForConfigChangeCheck 00:03:54.041 17:20:20 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:54.041 17:20:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:54.041 17:20:20 -- common/autotest_common.sh@10 -- # set +x 00:03:54.041 17:20:20 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:54.041 17:20:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:54.609 17:20:20 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:03:54.609 INFO: shutting down applications... 
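Stripped of the tgt_rpc indirection, the target configuration that save_config captures here is built with a handful of rpc.py calls, all of them visible in the trace above; in this sketch the redirect into spdk_tgt_config.json stands in for the configs_path bookkeeping the test script does:

  RPC='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t rdma -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $RPC save_config > spdk_tgt_config.json

Once the 4420 listener is up, an initiator elsewhere on the fabric could attach with something like nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 (the -i 15 queue count is what this script picked for the 0x1017 mlx5 ports), though this particular test never connects a host.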
00:03:54.609 17:20:20 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:54.609 17:20:20 -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:54.609 17:20:20 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:54.609 17:20:20 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:56.021 Calling clear_iscsi_subsystem 00:03:56.021 Calling clear_nvmf_subsystem 00:03:56.021 Calling clear_nbd_subsystem 00:03:56.021 Calling clear_ublk_subsystem 00:03:56.021 Calling clear_vhost_blk_subsystem 00:03:56.021 Calling clear_vhost_scsi_subsystem 00:03:56.021 Calling clear_bdev_subsystem 00:03:56.021 17:20:22 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:03:56.021 17:20:22 -- json_config/json_config.sh@343 -- # count=100 00:03:56.021 17:20:22 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:56.021 17:20:22 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.021 17:20:22 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:56.021 17:20:22 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:56.590 17:20:22 -- json_config/json_config.sh@345 -- # break 00:03:56.590 17:20:22 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:56.590 17:20:22 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:56.590 17:20:22 -- json_config/common.sh@31 -- # local app=target 00:03:56.590 17:20:22 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:56.590 17:20:22 -- json_config/common.sh@35 -- # [[ -n 1891083 ]] 00:03:56.590 17:20:22 -- json_config/common.sh@38 -- # kill -SIGINT 1891083 00:03:56.590 17:20:22 -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:56.590 17:20:22 -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:56.590 17:20:22 -- json_config/common.sh@41 -- # kill -0 1891083 00:03:56.590 17:20:22 -- json_config/common.sh@45 -- # sleep 0.5 00:03:57.158 17:20:23 -- json_config/common.sh@40 -- # (( i++ )) 00:03:57.158 17:20:23 -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:57.158 17:20:23 -- json_config/common.sh@41 -- # kill -0 1891083 00:03:57.158 17:20:23 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:57.158 17:20:23 -- json_config/common.sh@43 -- # break 00:03:57.158 17:20:23 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:57.158 17:20:23 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:57.158 SPDK target shutdown done 00:03:57.158 17:20:23 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:03:57.158 INFO: relaunching applications... 
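The shutdown helper traced above is nothing more than SIGINT followed by a bounded poll; a sketch that mirrors it, assuming $app_pid holds the target's pid:

  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$app_pid" 2>/dev/null || break   # process gone: clean shutdown
      sleep 0.5
  done
  kill -0 "$app_pid" 2>/dev/null && echo 'target did not stop in time' >&2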
00:03:57.158 17:20:23 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.158 17:20:23 -- json_config/common.sh@9 -- # local app=target 00:03:57.158 17:20:23 -- json_config/common.sh@10 -- # shift 00:03:57.158 17:20:23 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:57.158 17:20:23 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:57.158 17:20:23 -- json_config/common.sh@15 -- # local app_extra_params= 00:03:57.158 17:20:23 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.158 17:20:23 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.158 17:20:23 -- json_config/common.sh@22 -- # app_pid["$app"]=1893950 00:03:57.158 17:20:23 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.158 17:20:23 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:57.158 Waiting for target to run... 00:03:57.158 17:20:23 -- json_config/common.sh@25 -- # waitforlisten 1893950 /var/tmp/spdk_tgt.sock 00:03:57.158 17:20:23 -- common/autotest_common.sh@817 -- # '[' -z 1893950 ']' 00:03:57.158 17:20:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:57.158 17:20:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:57.158 17:20:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:57.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:57.158 17:20:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:57.158 17:20:23 -- common/autotest_common.sh@10 -- # set +x 00:03:57.158 [2024-04-18 17:20:23.442161] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:03:57.158 [2024-04-18 17:20:23.442258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893950 ] 00:03:57.158 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.726 [2024-04-18 17:20:23.959915] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.726 [2024-04-18 17:20:24.065080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.018 [2024-04-18 17:20:27.121247] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16ac780/0x1516cc0) succeed. 00:04:01.018 [2024-04-18 17:20:27.135124] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16b11e0/0x1596d40) succeed. 00:04:01.018 [2024-04-18 17:20:27.193306] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:01.586 17:20:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:01.586 17:20:27 -- common/autotest_common.sh@850 -- # return 0 00:04:01.586 17:20:27 -- json_config/common.sh@26 -- # echo '' 00:04:01.586 00:04:01.586 17:20:27 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:01.586 17:20:27 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:01.586 INFO: Checking if target configuration is the same... 
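The "is the configuration the same" check traced below does not compare files byte for byte: json_diff.sh runs both JSON documents through config_filter.py -method sort so that ordering differences cannot cause a false mismatch, then hands the normalized copies to diff -u. Reduced to its essentials, with example temp file names:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live.json
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'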
00:04:01.586 17:20:27 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.586 17:20:27 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:01.586 17:20:27 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:01.586 + '[' 2 -ne 2 ']' 00:04:01.586 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:01.586 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:01.586 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:01.586 +++ basename /dev/fd/62 00:04:01.586 ++ mktemp /tmp/62.XXX 00:04:01.586 + tmp_file_1=/tmp/62.4Ko 00:04:01.586 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.586 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:01.586 + tmp_file_2=/tmp/spdk_tgt_config.json.Acl 00:04:01.586 + ret=0 00:04:01.586 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:01.844 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:01.844 + diff -u /tmp/62.4Ko /tmp/spdk_tgt_config.json.Acl 00:04:01.844 + echo 'INFO: JSON config files are the same' 00:04:01.844 INFO: JSON config files are the same 00:04:01.844 + rm /tmp/62.4Ko /tmp/spdk_tgt_config.json.Acl 00:04:01.844 + exit 0 00:04:01.844 17:20:28 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:01.844 17:20:28 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:01.844 INFO: changing configuration and checking if this can be detected... 00:04:01.844 17:20:28 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:01.844 17:20:28 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:02.102 17:20:28 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.102 17:20:28 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:02.102 17:20:28 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.102 + '[' 2 -ne 2 ']' 00:04:02.102 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:02.102 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
00:04:02.102 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:02.102 +++ basename /dev/fd/62 00:04:02.102 ++ mktemp /tmp/62.XXX 00:04:02.102 + tmp_file_1=/tmp/62.B55 00:04:02.102 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.102 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:02.102 + tmp_file_2=/tmp/spdk_tgt_config.json.IUL 00:04:02.102 + ret=0 00:04:02.102 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:02.361 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:02.622 + diff -u /tmp/62.B55 /tmp/spdk_tgt_config.json.IUL 00:04:02.622 + ret=1 00:04:02.622 + echo '=== Start of file: /tmp/62.B55 ===' 00:04:02.622 + cat /tmp/62.B55 00:04:02.622 + echo '=== End of file: /tmp/62.B55 ===' 00:04:02.622 + echo '' 00:04:02.622 + echo '=== Start of file: /tmp/spdk_tgt_config.json.IUL ===' 00:04:02.622 + cat /tmp/spdk_tgt_config.json.IUL 00:04:02.622 + echo '=== End of file: /tmp/spdk_tgt_config.json.IUL ===' 00:04:02.622 + echo '' 00:04:02.622 + rm /tmp/62.B55 /tmp/spdk_tgt_config.json.IUL 00:04:02.622 + exit 1 00:04:02.622 17:20:28 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:02.622 INFO: configuration change detected. 00:04:02.622 17:20:28 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:02.622 17:20:28 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:02.622 17:20:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:02.622 17:20:28 -- common/autotest_common.sh@10 -- # set +x 00:04:02.622 17:20:28 -- json_config/json_config.sh@307 -- # local ret=0 00:04:02.622 17:20:28 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:02.622 17:20:28 -- json_config/json_config.sh@317 -- # [[ -n 1893950 ]] 00:04:02.622 17:20:28 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:02.622 17:20:28 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:02.622 17:20:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:02.622 17:20:28 -- common/autotest_common.sh@10 -- # set +x 00:04:02.622 17:20:28 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:02.622 17:20:28 -- json_config/json_config.sh@193 -- # uname -s 00:04:02.622 17:20:28 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:02.622 17:20:28 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:02.622 17:20:28 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:02.622 17:20:28 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:02.622 17:20:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:02.622 17:20:28 -- common/autotest_common.sh@10 -- # set +x 00:04:02.622 17:20:28 -- json_config/json_config.sh@323 -- # killprocess 1893950 00:04:02.622 17:20:28 -- common/autotest_common.sh@936 -- # '[' -z 1893950 ']' 00:04:02.622 17:20:28 -- common/autotest_common.sh@940 -- # kill -0 1893950 00:04:02.622 17:20:28 -- common/autotest_common.sh@941 -- # uname 00:04:02.622 17:20:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:02.622 17:20:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1893950 00:04:02.622 17:20:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:02.622 17:20:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:02.622 17:20:28 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 1893950' 00:04:02.622 killing process with pid 1893950 00:04:02.622 17:20:28 -- common/autotest_common.sh@955 -- # kill 1893950 00:04:02.622 17:20:28 -- common/autotest_common.sh@960 -- # wait 1893950 00:04:04.529 17:20:30 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.529 17:20:30 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:04.529 17:20:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:04.529 17:20:30 -- common/autotest_common.sh@10 -- # set +x 00:04:04.529 17:20:30 -- json_config/json_config.sh@328 -- # return 0 00:04:04.529 17:20:30 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:04.529 INFO: Success 00:04:04.529 17:20:30 -- json_config/json_config.sh@1 -- # nvmftestfini 00:04:04.529 17:20:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:04:04.529 17:20:30 -- nvmf/common.sh@117 -- # sync 00:04:04.529 17:20:30 -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:04:04.529 17:20:30 -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:04:04.529 17:20:30 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:04:04.529 17:20:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:04:04.529 17:20:30 -- nvmf/common.sh@484 -- # [[ '' == \t\c\p ]] 00:04:04.529 00:04:04.529 real 0m18.961s 00:04:04.529 user 0m21.652s 00:04:04.529 sys 0m3.542s 00:04:04.529 17:20:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:04.529 17:20:30 -- common/autotest_common.sh@10 -- # set +x 00:04:04.529 ************************************ 00:04:04.529 END TEST json_config 00:04:04.529 ************************************ 00:04:04.529 17:20:30 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:04.529 17:20:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.529 17:20:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.529 17:20:30 -- common/autotest_common.sh@10 -- # set +x 00:04:04.529 ************************************ 00:04:04.529 START TEST json_config_extra_key 00:04:04.529 ************************************ 00:04:04.529 17:20:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:04.529 17:20:30 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:04.529 17:20:30 -- nvmf/common.sh@7 -- # uname -s 00:04:04.529 17:20:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:04.529 17:20:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:04.529 17:20:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:04.529 17:20:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:04.529 17:20:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:04.529 17:20:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:04.529 17:20:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:04.529 17:20:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:04.529 17:20:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:04.529 17:20:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:04.529 17:20:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:04.529 17:20:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 
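The host identity being set up in this stretch of nvmf/common.sh output (and folded into the NVME_HOST flag pair on the next lines) is what an initiator would present when it logs in: nvme gen-hostnqn mints a uuid-based host NQN, and the uuid part doubles as the host ID. A small sketch, with the derivation of the ID marked as an assumption:

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:5b23e107-... in this run
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumption: host ID is the uuid suffix of the host NQN
  # later expanded onto nvme connect commands as:
  #   --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"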
00:04:04.529 17:20:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:04.529 17:20:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:04.529 17:20:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:04.529 17:20:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:04.529 17:20:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:04.529 17:20:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:04.529 17:20:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:04.529 17:20:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:04.529 17:20:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.529 17:20:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.529 17:20:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.529 17:20:30 -- paths/export.sh@5 -- # export PATH 00:04:04.529 17:20:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:04.529 17:20:30 -- nvmf/common.sh@47 -- # : 0 00:04:04.529 17:20:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:04.529 17:20:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:04.529 17:20:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:04.529 17:20:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:04.529 17:20:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:04.529 17:20:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:04.529 17:20:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:04.529 17:20:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:04.529 17:20:30 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:04.529 17:20:30 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:04.529 17:20:30 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:04.529 17:20:30 -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:04.529 17:20:30 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:04.529 17:20:30 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:04.529 17:20:30 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:04.529 17:20:30 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:04.529 17:20:30 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:04.529 17:20:30 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:04.529 17:20:30 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:04.529 INFO: launching applications... 00:04:04.529 17:20:30 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:04.529 17:20:30 -- json_config/common.sh@9 -- # local app=target 00:04:04.529 17:20:30 -- json_config/common.sh@10 -- # shift 00:04:04.529 17:20:30 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:04.529 17:20:30 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:04.529 17:20:30 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:04.529 17:20:30 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.529 17:20:30 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:04.529 17:20:30 -- json_config/common.sh@22 -- # app_pid["$app"]=1894936 00:04:04.529 17:20:30 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:04.529 17:20:30 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:04.529 Waiting for target to run... 00:04:04.529 17:20:30 -- json_config/common.sh@25 -- # waitforlisten 1894936 /var/tmp/spdk_tgt.sock 00:04:04.529 17:20:30 -- common/autotest_common.sh@817 -- # '[' -z 1894936 ']' 00:04:04.529 17:20:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:04.529 17:20:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:04.529 17:20:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:04.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:04.529 17:20:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:04.529 17:20:30 -- common/autotest_common.sh@10 -- # set +x 00:04:04.529 [2024-04-18 17:20:30.921942] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:04:04.529 [2024-04-18 17:20:30.922020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894936 ] 00:04:04.529 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.789 [2024-04-18 17:20:31.257003] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.047 [2024-04-18 17:20:31.344771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.307 17:20:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:05.307 17:20:31 -- common/autotest_common.sh@850 -- # return 0 00:04:05.307 17:20:31 -- json_config/common.sh@26 -- # echo '' 00:04:05.307 00:04:05.307 17:20:31 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:05.307 INFO: shutting down applications... 00:04:05.307 17:20:31 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:05.307 17:20:31 -- json_config/common.sh@31 -- # local app=target 00:04:05.307 17:20:31 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:05.307 17:20:31 -- json_config/common.sh@35 -- # [[ -n 1894936 ]] 00:04:05.307 17:20:31 -- json_config/common.sh@38 -- # kill -SIGINT 1894936 00:04:05.307 17:20:31 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:05.307 17:20:31 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.307 17:20:31 -- json_config/common.sh@41 -- # kill -0 1894936 00:04:05.307 17:20:31 -- json_config/common.sh@45 -- # sleep 0.5 00:04:05.895 17:20:32 -- json_config/common.sh@40 -- # (( i++ )) 00:04:05.896 17:20:32 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:05.896 17:20:32 -- json_config/common.sh@41 -- # kill -0 1894936 00:04:05.896 17:20:32 -- json_config/common.sh@45 -- # sleep 0.5 00:04:06.462 17:20:32 -- json_config/common.sh@40 -- # (( i++ )) 00:04:06.462 17:20:32 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:06.462 17:20:32 -- json_config/common.sh@41 -- # kill -0 1894936 00:04:06.462 17:20:32 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:06.462 17:20:32 -- json_config/common.sh@43 -- # break 00:04:06.462 17:20:32 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:06.462 17:20:32 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:06.462 SPDK target shutdown done 00:04:06.462 17:20:32 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:06.462 Success 00:04:06.462 00:04:06.462 real 0m2.028s 00:04:06.462 user 0m1.546s 00:04:06.462 sys 0m0.418s 00:04:06.462 17:20:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:06.462 17:20:32 -- common/autotest_common.sh@10 -- # set +x 00:04:06.462 ************************************ 00:04:06.462 END TEST json_config_extra_key 00:04:06.462 ************************************ 00:04:06.462 17:20:32 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:06.462 17:20:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.462 17:20:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.462 17:20:32 -- common/autotest_common.sh@10 -- # set +x 00:04:06.462 ************************************ 00:04:06.462 START TEST alias_rpc 00:04:06.462 ************************************ 00:04:06.462 17:20:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:06.721 * 
Looking for test storage... 00:04:06.721 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:04:06.721 17:20:33 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:06.721 17:20:33 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1895259 00:04:06.721 17:20:33 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.721 17:20:33 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1895259 00:04:06.721 17:20:33 -- common/autotest_common.sh@817 -- # '[' -z 1895259 ']' 00:04:06.721 17:20:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.721 17:20:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:06.721 17:20:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.721 17:20:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:06.721 17:20:33 -- common/autotest_common.sh@10 -- # set +x 00:04:06.721 [2024-04-18 17:20:33.070579] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:06.721 [2024-04-18 17:20:33.070677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895259 ] 00:04:06.721 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.721 [2024-04-18 17:20:33.132343] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.721 [2024-04-18 17:20:33.247467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.653 17:20:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:07.653 17:20:33 -- common/autotest_common.sh@850 -- # return 0 00:04:07.653 17:20:33 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:07.911 17:20:34 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1895259 00:04:07.911 17:20:34 -- common/autotest_common.sh@936 -- # '[' -z 1895259 ']' 00:04:07.911 17:20:34 -- common/autotest_common.sh@940 -- # kill -0 1895259 00:04:07.911 17:20:34 -- common/autotest_common.sh@941 -- # uname 00:04:07.911 17:20:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:07.911 17:20:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1895259 00:04:07.911 17:20:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:07.911 17:20:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:07.911 17:20:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1895259' 00:04:07.911 killing process with pid 1895259 00:04:07.911 17:20:34 -- common/autotest_common.sh@955 -- # kill 1895259 00:04:07.911 17:20:34 -- common/autotest_common.sh@960 -- # wait 1895259 00:04:08.476 00:04:08.476 real 0m1.768s 00:04:08.476 user 0m2.005s 00:04:08.476 sys 0m0.452s 00:04:08.476 17:20:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:08.476 17:20:34 -- common/autotest_common.sh@10 -- # set +x 00:04:08.476 ************************************ 00:04:08.476 END TEST alias_rpc 00:04:08.476 ************************************ 00:04:08.476 17:20:34 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:08.476 17:20:34 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:08.476 17:20:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.476 17:20:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.476 17:20:34 -- common/autotest_common.sh@10 -- # set +x 00:04:08.476 ************************************ 00:04:08.476 START TEST spdkcli_tcp 00:04:08.476 ************************************ 00:04:08.476 17:20:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:08.476 * Looking for test storage... 00:04:08.476 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:04:08.476 17:20:34 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:04:08.476 17:20:34 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:08.476 17:20:34 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:04:08.476 17:20:34 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:08.476 17:20:34 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:08.476 17:20:34 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:08.476 17:20:34 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:08.476 17:20:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:08.476 17:20:34 -- common/autotest_common.sh@10 -- # set +x 00:04:08.476 17:20:34 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1895587 00:04:08.476 17:20:34 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:08.476 17:20:34 -- spdkcli/tcp.sh@27 -- # waitforlisten 1895587 00:04:08.476 17:20:34 -- common/autotest_common.sh@817 -- # '[' -z 1895587 ']' 00:04:08.476 17:20:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.476 17:20:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:08.476 17:20:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.476 17:20:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:08.476 17:20:34 -- common/autotest_common.sh@10 -- # set +x 00:04:08.476 [2024-04-18 17:20:34.968193] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:04:08.476 [2024-04-18 17:20:34.968290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895587 ] 00:04:08.476 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.735 [2024-04-18 17:20:35.025298] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:08.735 [2024-04-18 17:20:35.128807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:08.735 [2024-04-18 17:20:35.128812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.018 17:20:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:09.018 17:20:35 -- common/autotest_common.sh@850 -- # return 0 00:04:09.018 17:20:35 -- spdkcli/tcp.sh@31 -- # socat_pid=1895591 00:04:09.018 17:20:35 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:09.018 17:20:35 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:09.278 [ 00:04:09.278 "bdev_malloc_delete", 00:04:09.278 "bdev_malloc_create", 00:04:09.278 "bdev_null_resize", 00:04:09.278 "bdev_null_delete", 00:04:09.278 "bdev_null_create", 00:04:09.278 "bdev_nvme_cuse_unregister", 00:04:09.278 "bdev_nvme_cuse_register", 00:04:09.278 "bdev_opal_new_user", 00:04:09.278 "bdev_opal_set_lock_state", 00:04:09.278 "bdev_opal_delete", 00:04:09.278 "bdev_opal_get_info", 00:04:09.278 "bdev_opal_create", 00:04:09.278 "bdev_nvme_opal_revert", 00:04:09.278 "bdev_nvme_opal_init", 00:04:09.278 "bdev_nvme_send_cmd", 00:04:09.278 "bdev_nvme_get_path_iostat", 00:04:09.278 "bdev_nvme_get_mdns_discovery_info", 00:04:09.278 "bdev_nvme_stop_mdns_discovery", 00:04:09.278 "bdev_nvme_start_mdns_discovery", 00:04:09.278 "bdev_nvme_set_multipath_policy", 00:04:09.278 "bdev_nvme_set_preferred_path", 00:04:09.278 "bdev_nvme_get_io_paths", 00:04:09.278 "bdev_nvme_remove_error_injection", 00:04:09.278 "bdev_nvme_add_error_injection", 00:04:09.278 "bdev_nvme_get_discovery_info", 00:04:09.278 "bdev_nvme_stop_discovery", 00:04:09.278 "bdev_nvme_start_discovery", 00:04:09.278 "bdev_nvme_get_controller_health_info", 00:04:09.278 "bdev_nvme_disable_controller", 00:04:09.278 "bdev_nvme_enable_controller", 00:04:09.278 "bdev_nvme_reset_controller", 00:04:09.278 "bdev_nvme_get_transport_statistics", 00:04:09.278 "bdev_nvme_apply_firmware", 00:04:09.278 "bdev_nvme_detach_controller", 00:04:09.278 "bdev_nvme_get_controllers", 00:04:09.278 "bdev_nvme_attach_controller", 00:04:09.278 "bdev_nvme_set_hotplug", 00:04:09.278 "bdev_nvme_set_options", 00:04:09.278 "bdev_passthru_delete", 00:04:09.278 "bdev_passthru_create", 00:04:09.278 "bdev_lvol_grow_lvstore", 00:04:09.278 "bdev_lvol_get_lvols", 00:04:09.278 "bdev_lvol_get_lvstores", 00:04:09.278 "bdev_lvol_delete", 00:04:09.278 "bdev_lvol_set_read_only", 00:04:09.278 "bdev_lvol_resize", 00:04:09.278 "bdev_lvol_decouple_parent", 00:04:09.278 "bdev_lvol_inflate", 00:04:09.278 "bdev_lvol_rename", 00:04:09.278 "bdev_lvol_clone_bdev", 00:04:09.278 "bdev_lvol_clone", 00:04:09.278 "bdev_lvol_snapshot", 00:04:09.278 "bdev_lvol_create", 00:04:09.278 "bdev_lvol_delete_lvstore", 00:04:09.278 "bdev_lvol_rename_lvstore", 00:04:09.278 "bdev_lvol_create_lvstore", 00:04:09.278 "bdev_raid_set_options", 00:04:09.278 "bdev_raid_remove_base_bdev", 00:04:09.278 "bdev_raid_add_base_bdev", 00:04:09.278 "bdev_raid_delete", 00:04:09.278 "bdev_raid_create", 
00:04:09.278 "bdev_raid_get_bdevs", 00:04:09.278 "bdev_error_inject_error", 00:04:09.278 "bdev_error_delete", 00:04:09.278 "bdev_error_create", 00:04:09.278 "bdev_split_delete", 00:04:09.278 "bdev_split_create", 00:04:09.278 "bdev_delay_delete", 00:04:09.278 "bdev_delay_create", 00:04:09.278 "bdev_delay_update_latency", 00:04:09.278 "bdev_zone_block_delete", 00:04:09.278 "bdev_zone_block_create", 00:04:09.278 "blobfs_create", 00:04:09.278 "blobfs_detect", 00:04:09.278 "blobfs_set_cache_size", 00:04:09.278 "bdev_aio_delete", 00:04:09.278 "bdev_aio_rescan", 00:04:09.278 "bdev_aio_create", 00:04:09.278 "bdev_ftl_set_property", 00:04:09.278 "bdev_ftl_get_properties", 00:04:09.278 "bdev_ftl_get_stats", 00:04:09.278 "bdev_ftl_unmap", 00:04:09.278 "bdev_ftl_unload", 00:04:09.278 "bdev_ftl_delete", 00:04:09.278 "bdev_ftl_load", 00:04:09.278 "bdev_ftl_create", 00:04:09.278 "bdev_virtio_attach_controller", 00:04:09.278 "bdev_virtio_scsi_get_devices", 00:04:09.278 "bdev_virtio_detach_controller", 00:04:09.278 "bdev_virtio_blk_set_hotplug", 00:04:09.278 "bdev_iscsi_delete", 00:04:09.278 "bdev_iscsi_create", 00:04:09.278 "bdev_iscsi_set_options", 00:04:09.278 "accel_error_inject_error", 00:04:09.278 "ioat_scan_accel_module", 00:04:09.278 "dsa_scan_accel_module", 00:04:09.278 "iaa_scan_accel_module", 00:04:09.278 "keyring_file_remove_key", 00:04:09.278 "keyring_file_add_key", 00:04:09.278 "iscsi_set_options", 00:04:09.278 "iscsi_get_auth_groups", 00:04:09.278 "iscsi_auth_group_remove_secret", 00:04:09.278 "iscsi_auth_group_add_secret", 00:04:09.278 "iscsi_delete_auth_group", 00:04:09.278 "iscsi_create_auth_group", 00:04:09.278 "iscsi_set_discovery_auth", 00:04:09.278 "iscsi_get_options", 00:04:09.278 "iscsi_target_node_request_logout", 00:04:09.278 "iscsi_target_node_set_redirect", 00:04:09.278 "iscsi_target_node_set_auth", 00:04:09.278 "iscsi_target_node_add_lun", 00:04:09.278 "iscsi_get_stats", 00:04:09.278 "iscsi_get_connections", 00:04:09.278 "iscsi_portal_group_set_auth", 00:04:09.278 "iscsi_start_portal_group", 00:04:09.278 "iscsi_delete_portal_group", 00:04:09.278 "iscsi_create_portal_group", 00:04:09.278 "iscsi_get_portal_groups", 00:04:09.278 "iscsi_delete_target_node", 00:04:09.278 "iscsi_target_node_remove_pg_ig_maps", 00:04:09.278 "iscsi_target_node_add_pg_ig_maps", 00:04:09.278 "iscsi_create_target_node", 00:04:09.278 "iscsi_get_target_nodes", 00:04:09.278 "iscsi_delete_initiator_group", 00:04:09.278 "iscsi_initiator_group_remove_initiators", 00:04:09.278 "iscsi_initiator_group_add_initiators", 00:04:09.278 "iscsi_create_initiator_group", 00:04:09.278 "iscsi_get_initiator_groups", 00:04:09.278 "nvmf_set_crdt", 00:04:09.278 "nvmf_set_config", 00:04:09.278 "nvmf_set_max_subsystems", 00:04:09.278 "nvmf_subsystem_get_listeners", 00:04:09.278 "nvmf_subsystem_get_qpairs", 00:04:09.278 "nvmf_subsystem_get_controllers", 00:04:09.278 "nvmf_get_stats", 00:04:09.278 "nvmf_get_transports", 00:04:09.278 "nvmf_create_transport", 00:04:09.278 "nvmf_get_targets", 00:04:09.278 "nvmf_delete_target", 00:04:09.278 "nvmf_create_target", 00:04:09.278 "nvmf_subsystem_allow_any_host", 00:04:09.278 "nvmf_subsystem_remove_host", 00:04:09.279 "nvmf_subsystem_add_host", 00:04:09.279 "nvmf_ns_remove_host", 00:04:09.279 "nvmf_ns_add_host", 00:04:09.279 "nvmf_subsystem_remove_ns", 00:04:09.279 "nvmf_subsystem_add_ns", 00:04:09.279 "nvmf_subsystem_listener_set_ana_state", 00:04:09.279 "nvmf_discovery_get_referrals", 00:04:09.279 "nvmf_discovery_remove_referral", 00:04:09.279 "nvmf_discovery_add_referral", 00:04:09.279 
"nvmf_subsystem_remove_listener", 00:04:09.279 "nvmf_subsystem_add_listener", 00:04:09.279 "nvmf_delete_subsystem", 00:04:09.279 "nvmf_create_subsystem", 00:04:09.279 "nvmf_get_subsystems", 00:04:09.279 "env_dpdk_get_mem_stats", 00:04:09.279 "nbd_get_disks", 00:04:09.279 "nbd_stop_disk", 00:04:09.279 "nbd_start_disk", 00:04:09.279 "ublk_recover_disk", 00:04:09.279 "ublk_get_disks", 00:04:09.279 "ublk_stop_disk", 00:04:09.279 "ublk_start_disk", 00:04:09.279 "ublk_destroy_target", 00:04:09.279 "ublk_create_target", 00:04:09.279 "virtio_blk_create_transport", 00:04:09.279 "virtio_blk_get_transports", 00:04:09.279 "vhost_controller_set_coalescing", 00:04:09.279 "vhost_get_controllers", 00:04:09.279 "vhost_delete_controller", 00:04:09.279 "vhost_create_blk_controller", 00:04:09.279 "vhost_scsi_controller_remove_target", 00:04:09.279 "vhost_scsi_controller_add_target", 00:04:09.279 "vhost_start_scsi_controller", 00:04:09.279 "vhost_create_scsi_controller", 00:04:09.279 "thread_set_cpumask", 00:04:09.279 "framework_get_scheduler", 00:04:09.279 "framework_set_scheduler", 00:04:09.279 "framework_get_reactors", 00:04:09.279 "thread_get_io_channels", 00:04:09.279 "thread_get_pollers", 00:04:09.279 "thread_get_stats", 00:04:09.279 "framework_monitor_context_switch", 00:04:09.279 "spdk_kill_instance", 00:04:09.279 "log_enable_timestamps", 00:04:09.279 "log_get_flags", 00:04:09.279 "log_clear_flag", 00:04:09.279 "log_set_flag", 00:04:09.279 "log_get_level", 00:04:09.279 "log_set_level", 00:04:09.279 "log_get_print_level", 00:04:09.279 "log_set_print_level", 00:04:09.279 "framework_enable_cpumask_locks", 00:04:09.279 "framework_disable_cpumask_locks", 00:04:09.279 "framework_wait_init", 00:04:09.279 "framework_start_init", 00:04:09.279 "scsi_get_devices", 00:04:09.279 "bdev_get_histogram", 00:04:09.279 "bdev_enable_histogram", 00:04:09.279 "bdev_set_qos_limit", 00:04:09.279 "bdev_set_qd_sampling_period", 00:04:09.279 "bdev_get_bdevs", 00:04:09.279 "bdev_reset_iostat", 00:04:09.279 "bdev_get_iostat", 00:04:09.279 "bdev_examine", 00:04:09.279 "bdev_wait_for_examine", 00:04:09.279 "bdev_set_options", 00:04:09.279 "notify_get_notifications", 00:04:09.279 "notify_get_types", 00:04:09.279 "accel_get_stats", 00:04:09.279 "accel_set_options", 00:04:09.279 "accel_set_driver", 00:04:09.279 "accel_crypto_key_destroy", 00:04:09.279 "accel_crypto_keys_get", 00:04:09.279 "accel_crypto_key_create", 00:04:09.279 "accel_assign_opc", 00:04:09.279 "accel_get_module_info", 00:04:09.279 "accel_get_opc_assignments", 00:04:09.279 "vmd_rescan", 00:04:09.279 "vmd_remove_device", 00:04:09.279 "vmd_enable", 00:04:09.279 "sock_set_default_impl", 00:04:09.279 "sock_impl_set_options", 00:04:09.279 "sock_impl_get_options", 00:04:09.279 "iobuf_get_stats", 00:04:09.279 "iobuf_set_options", 00:04:09.279 "framework_get_pci_devices", 00:04:09.279 "framework_get_config", 00:04:09.279 "framework_get_subsystems", 00:04:09.279 "trace_get_info", 00:04:09.279 "trace_get_tpoint_group_mask", 00:04:09.279 "trace_disable_tpoint_group", 00:04:09.279 "trace_enable_tpoint_group", 00:04:09.279 "trace_clear_tpoint_mask", 00:04:09.279 "trace_set_tpoint_mask", 00:04:09.279 "keyring_get_keys", 00:04:09.279 "spdk_get_version", 00:04:09.279 "rpc_get_methods" 00:04:09.279 ] 00:04:09.279 17:20:35 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:09.279 17:20:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:09.279 17:20:35 -- common/autotest_common.sh@10 -- # set +x 00:04:09.279 17:20:35 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM 
EXIT 00:04:09.279 17:20:35 -- spdkcli/tcp.sh@38 -- # killprocess 1895587 00:04:09.279 17:20:35 -- common/autotest_common.sh@936 -- # '[' -z 1895587 ']' 00:04:09.279 17:20:35 -- common/autotest_common.sh@940 -- # kill -0 1895587 00:04:09.279 17:20:35 -- common/autotest_common.sh@941 -- # uname 00:04:09.279 17:20:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:09.279 17:20:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1895587 00:04:09.279 17:20:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:09.279 17:20:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:09.279 17:20:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1895587' 00:04:09.279 killing process with pid 1895587 00:04:09.279 17:20:35 -- common/autotest_common.sh@955 -- # kill 1895587 00:04:09.279 17:20:35 -- common/autotest_common.sh@960 -- # wait 1895587 00:04:10.057 00:04:10.057 real 0m1.260s 00:04:10.057 user 0m2.188s 00:04:10.057 sys 0m0.439s 00:04:10.057 17:20:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.057 17:20:36 -- common/autotest_common.sh@10 -- # set +x 00:04:10.057 ************************************ 00:04:10.057 END TEST spdkcli_tcp 00:04:10.057 ************************************ 00:04:10.057 17:20:36 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:10.057 17:20:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.057 17:20:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.057 17:20:36 -- common/autotest_common.sh@10 -- # set +x 00:04:10.057 ************************************ 00:04:10.057 START TEST dpdk_mem_utility 00:04:10.057 ************************************ 00:04:10.057 17:20:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:10.057 * Looking for test storage... 00:04:10.057 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:04:10.057 17:20:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:10.057 17:20:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1895794 00:04:10.057 17:20:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.057 17:20:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1895794 00:04:10.057 17:20:36 -- common/autotest_common.sh@817 -- # '[' -z 1895794 ']' 00:04:10.057 17:20:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.057 17:20:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:10.057 17:20:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.057 17:20:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:10.057 17:20:36 -- common/autotest_common.sh@10 -- # set +x 00:04:10.057 [2024-04-18 17:20:36.350998] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:04:10.057 [2024-04-18 17:20:36.351092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895794 ] 00:04:10.057 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.057 [2024-04-18 17:20:36.407725] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.057 [2024-04-18 17:20:36.511312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.315 17:20:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:10.315 17:20:36 -- common/autotest_common.sh@850 -- # return 0 00:04:10.315 17:20:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:10.315 17:20:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:10.315 17:20:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.315 17:20:36 -- common/autotest_common.sh@10 -- # set +x 00:04:10.315 { 00:04:10.315 "filename": "/tmp/spdk_mem_dump.txt" 00:04:10.315 } 00:04:10.315 17:20:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.315 17:20:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:10.315 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:10.315 1 heaps totaling size 814.000000 MiB 00:04:10.315 size: 814.000000 MiB heap id: 0 00:04:10.315 end heaps---------- 00:04:10.315 8 mempools totaling size 598.116089 MiB 00:04:10.315 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:10.315 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:10.315 size: 84.521057 MiB name: bdev_io_1895794 00:04:10.315 size: 51.011292 MiB name: evtpool_1895794 00:04:10.315 size: 50.003479 MiB name: msgpool_1895794 00:04:10.315 size: 21.763794 MiB name: PDU_Pool 00:04:10.315 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:10.315 size: 0.026123 MiB name: Session_Pool 00:04:10.315 end mempools------- 00:04:10.315 6 memzones totaling size 4.142822 MiB 00:04:10.315 size: 1.000366 MiB name: RG_ring_0_1895794 00:04:10.315 size: 1.000366 MiB name: RG_ring_1_1895794 00:04:10.315 size: 1.000366 MiB name: RG_ring_4_1895794 00:04:10.315 size: 1.000366 MiB name: RG_ring_5_1895794 00:04:10.315 size: 0.125366 MiB name: RG_ring_2_1895794 00:04:10.315 size: 0.015991 MiB name: RG_ring_3_1895794 00:04:10.315 end memzones------- 00:04:10.315 17:20:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:10.572 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:10.572 list of free elements. 
size: 12.519348 MiB 00:04:10.572 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:10.572 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:10.572 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:10.572 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:10.572 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:10.572 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:10.572 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:10.572 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:10.573 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:10.573 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:10.573 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:10.573 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:10.573 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:10.573 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:10.573 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:10.573 list of standard malloc elements. size: 199.218079 MiB 00:04:10.573 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:10.573 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:10.573 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:10.573 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:10.573 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:10.573 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:10.573 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:10.573 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:10.573 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:10.573 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:10.573 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:10.573 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:10.573 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:10.573 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:10.573 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:10.573 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:10.573 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:10.573 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:10.573 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:10.573 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:10.573 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:10.573 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:10.573 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:10.573 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:10.573 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:10.573 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:10.573 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:10.573 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:10.573 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:10.573 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:10.573 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:10.573 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:10.573 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:10.573 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:10.573 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:10.573 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:10.573 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:10.573 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:10.573 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:10.573 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:10.573 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:10.573 list of memzone associated elements. size: 602.262573 MiB 00:04:10.573 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:10.573 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:10.573 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:10.573 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:10.573 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:10.573 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1895794_0 00:04:10.573 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:10.573 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1895794_0 00:04:10.573 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:10.573 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1895794_0 00:04:10.573 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:10.573 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:10.573 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:10.573 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:10.573 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:10.573 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1895794 00:04:10.573 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:10.573 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1895794 00:04:10.573 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:10.573 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1895794 00:04:10.573 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:10.573 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:10.573 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:10.573 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:10.573 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:10.573 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:10.573 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:10.573 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:10.573 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:10.573 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1895794 00:04:10.573 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:10.573 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1895794 00:04:10.573 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:10.573 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1895794 00:04:10.573 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:10.573 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1895794 00:04:10.573 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:10.573 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1895794 00:04:10.573 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:10.573 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:10.573 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:10.573 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:10.573 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:10.573 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:10.573 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:10.573 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1895794 00:04:10.573 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:10.573 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:10.573 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:10.573 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:10.573 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:10.573 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1895794 00:04:10.573 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:10.573 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:10.573 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:10.573 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1895794 00:04:10.573 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:10.573 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1895794 00:04:10.573 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:10.573 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:10.573 17:20:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:10.573 17:20:36 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1895794 00:04:10.573 17:20:36 -- common/autotest_common.sh@936 -- # '[' -z 1895794 ']' 00:04:10.573 17:20:36 -- common/autotest_common.sh@940 -- # kill -0 1895794 00:04:10.573 17:20:36 -- common/autotest_common.sh@941 -- # uname 00:04:10.573 17:20:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:10.573 17:20:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1895794 00:04:10.573 17:20:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:10.573 17:20:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:10.573 17:20:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1895794' 00:04:10.573 killing process with pid 1895794 00:04:10.574 17:20:36 -- common/autotest_common.sh@955 -- # kill 1895794 00:04:10.574 17:20:36 -- common/autotest_common.sh@960 -- # wait 1895794 00:04:11.140 00:04:11.140 real 0m1.125s 00:04:11.140 user 0m1.069s 00:04:11.140 sys 0m0.412s 00:04:11.140 17:20:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:11.140 17:20:37 -- common/autotest_common.sh@10 -- # set +x 00:04:11.140 ************************************ 00:04:11.140 END TEST dpdk_mem_utility 00:04:11.140 ************************************ 00:04:11.140 17:20:37 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:11.140 17:20:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.140 17:20:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.140 17:20:37 -- common/autotest_common.sh@10 -- # set +x 00:04:11.140 
************************************ 00:04:11.140 START TEST event 00:04:11.140 ************************************ 00:04:11.140 17:20:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:11.140 * Looking for test storage... 00:04:11.140 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:11.140 17:20:37 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:11.140 17:20:37 -- bdev/nbd_common.sh@6 -- # set -e 00:04:11.140 17:20:37 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:11.140 17:20:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:11.140 17:20:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.140 17:20:37 -- common/autotest_common.sh@10 -- # set +x 00:04:11.140 ************************************ 00:04:11.140 START TEST event_perf 00:04:11.140 ************************************ 00:04:11.140 17:20:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:11.140 Running I/O for 1 seconds...[2024-04-18 17:20:37.662207] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:11.140 [2024-04-18 17:20:37.662273] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896005 ] 00:04:11.398 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.398 [2024-04-18 17:20:37.726146] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:11.398 [2024-04-18 17:20:37.844448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.398 [2024-04-18 17:20:37.844505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:11.398 [2024-04-18 17:20:37.844623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.398 [2024-04-18 17:20:37.844626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.779 Running I/O for 1 seconds... 00:04:12.779 lcore 0: 232595 00:04:12.779 lcore 1: 232595 00:04:12.779 lcore 2: 232595 00:04:12.779 lcore 3: 232595 00:04:12.779 done. 00:04:12.779 00:04:12.779 real 0m1.321s 00:04:12.779 user 0m4.227s 00:04:12.779 sys 0m0.089s 00:04:12.779 17:20:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:12.779 17:20:38 -- common/autotest_common.sh@10 -- # set +x 00:04:12.779 ************************************ 00:04:12.779 END TEST event_perf 00:04:12.779 ************************************ 00:04:12.779 17:20:38 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:12.779 17:20:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:12.779 17:20:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:12.779 17:20:38 -- common/autotest_common.sh@10 -- # set +x 00:04:12.779 ************************************ 00:04:12.779 START TEST event_reactor 00:04:12.779 ************************************ 00:04:12.779 17:20:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:12.779 [2024-04-18 17:20:39.100191] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:04:12.779 [2024-04-18 17:20:39.100252] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896177 ] 00:04:12.779 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.779 [2024-04-18 17:20:39.162790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.779 [2024-04-18 17:20:39.277247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.192 test_start 00:04:14.192 oneshot 00:04:14.192 tick 100 00:04:14.192 tick 100 00:04:14.192 tick 250 00:04:14.192 tick 100 00:04:14.192 tick 100 00:04:14.192 tick 100 00:04:14.192 tick 250 00:04:14.192 tick 500 00:04:14.192 tick 100 00:04:14.192 tick 100 00:04:14.192 tick 250 00:04:14.192 tick 100 00:04:14.192 tick 100 00:04:14.192 test_end 00:04:14.192 00:04:14.192 real 0m1.312s 00:04:14.192 user 0m1.228s 00:04:14.192 sys 0m0.077s 00:04:14.192 17:20:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:14.192 17:20:40 -- common/autotest_common.sh@10 -- # set +x 00:04:14.192 ************************************ 00:04:14.192 END TEST event_reactor 00:04:14.192 ************************************ 00:04:14.192 17:20:40 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:14.192 17:20:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:14.192 17:20:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.192 17:20:40 -- common/autotest_common.sh@10 -- # set +x 00:04:14.192 ************************************ 00:04:14.192 START TEST event_reactor_perf 00:04:14.192 ************************************ 00:04:14.192 17:20:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:14.192 [2024-04-18 17:20:40.523204] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:04:14.192 [2024-04-18 17:20:40.523270] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896453 ] 00:04:14.192 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.192 [2024-04-18 17:20:40.585694] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.192 [2024-04-18 17:20:40.704055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.571 test_start 00:04:15.571 test_end 00:04:15.571 Performance: 353260 events per second 00:04:15.571 00:04:15.571 real 0m1.318s 00:04:15.571 user 0m1.231s 00:04:15.571 sys 0m0.081s 00:04:15.571 17:20:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:15.571 17:20:41 -- common/autotest_common.sh@10 -- # set +x 00:04:15.571 ************************************ 00:04:15.571 END TEST event_reactor_perf 00:04:15.571 ************************************ 00:04:15.571 17:20:41 -- event/event.sh@49 -- # uname -s 00:04:15.571 17:20:41 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:15.571 17:20:41 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:15.571 17:20:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.571 17:20:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.571 17:20:41 -- common/autotest_common.sh@10 -- # set +x 00:04:15.571 ************************************ 00:04:15.571 START TEST event_scheduler 00:04:15.571 ************************************ 00:04:15.571 17:20:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:15.571 * Looking for test storage... 00:04:15.571 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:04:15.571 17:20:41 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:15.571 17:20:41 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1896645 00:04:15.571 17:20:41 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:15.571 17:20:41 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.571 17:20:41 -- scheduler/scheduler.sh@37 -- # waitforlisten 1896645 00:04:15.571 17:20:41 -- common/autotest_common.sh@817 -- # '[' -z 1896645 ']' 00:04:15.571 17:20:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.571 17:20:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:15.571 17:20:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.571 17:20:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:15.571 17:20:41 -- common/autotest_common.sh@10 -- # set +x 00:04:15.571 [2024-04-18 17:20:42.032703] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:04:15.571 [2024-04-18 17:20:42.032789] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896645 ] 00:04:15.571 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.571 [2024-04-18 17:20:42.089547] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:15.830 [2024-04-18 17:20:42.195864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.830 [2024-04-18 17:20:42.195920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.830 [2024-04-18 17:20:42.195988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:15.830 [2024-04-18 17:20:42.195991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:15.830 17:20:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:15.830 17:20:42 -- common/autotest_common.sh@850 -- # return 0 00:04:15.830 17:20:42 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:15.830 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:15.830 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:15.830 POWER: Env isn't set yet! 00:04:15.830 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:15.830 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:15.830 POWER: Cannot get available frequencies of lcore 0 00:04:15.830 POWER: Attempting to initialise PSTAT power management... 00:04:15.830 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:15.830 POWER: Initialized successfully for lcore 0 power management 00:04:15.830 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:15.830 POWER: Initialized successfully for lcore 1 power management 00:04:15.830 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:15.830 POWER: Initialized successfully for lcore 2 power management 00:04:15.830 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:15.830 POWER: Initialized successfully for lcore 3 power management 00:04:15.830 17:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:15.830 17:20:42 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:15.830 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:15.830 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:15.830 [2024-04-18 17:20:42.362117] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
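The framework_set_scheduler and framework_start_init calls traced above are ordinary SPDK JSON-RPCs (both appear in the rpc_get_methods listing earlier), not something private to the scheduler test binary. A minimal sketch of the same bring-up against a plain spdk_tgt started with --wait-for-rpc, with paths abbreviated to the repo root and the host-dependent cpufreq/POWER notices left aside:

  ./build/bin/spdk_tgt -m 0xF --wait-for-rpc &
  ./scripts/rpc.py -r 100 -t 2 rpc_get_methods > /dev/null   # poll until the RPC socket is up (same idea as waitforlisten)
  ./scripts/rpc.py framework_set_scheduler dynamic           # select the dynamic scheduler while still in the pre-init RPC state
  ./scripts/rpc.py framework_start_init                      # finish init; reactors start and the scheduler begins balancing
  ./scripts/rpc.py framework_get_scheduler                   # report which scheduler is active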
00:04:15.830 17:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:15.830 17:20:42 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:15.830 17:20:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.830 17:20:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.830 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 ************************************ 00:04:16.089 START TEST scheduler_create_thread 00:04:16.089 ************************************ 00:04:16.089 17:20:42 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:04:16.089 17:20:42 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:16.089 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.089 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 2 00:04:16.089 17:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.089 17:20:42 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:16.089 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.089 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 3 00:04:16.089 17:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.089 17:20:42 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:16.089 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.089 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 4 00:04:16.089 17:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.089 17:20:42 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:16.089 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.089 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 5 00:04:16.089 17:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.089 17:20:42 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:16.089 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.089 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 6 00:04:16.089 17:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.089 17:20:42 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:16.089 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.089 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 7 00:04:16.089 17:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.089 17:20:42 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:16.089 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.089 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 8 00:04:16.089 17:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.089 17:20:42 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:16.089 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.089 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 9 00:04:16.089 
17:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.089 17:20:42 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:16.089 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.089 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 10 00:04:16.089 17:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.089 17:20:42 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:16.089 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.089 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 17:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.089 17:20:42 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:16.089 17:20:42 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:16.089 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.089 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.089 17:20:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.089 17:20:42 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:16.089 17:20:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.089 17:20:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.658 17:20:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.658 17:20:43 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:16.658 17:20:43 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:16.658 17:20:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.658 17:20:43 -- common/autotest_common.sh@10 -- # set +x 00:04:18.036 17:20:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:18.036 00:04:18.036 real 0m1.756s 00:04:18.036 user 0m0.011s 00:04:18.036 sys 0m0.002s 00:04:18.036 17:20:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:18.036 17:20:44 -- common/autotest_common.sh@10 -- # set +x 00:04:18.036 ************************************ 00:04:18.036 END TEST scheduler_create_thread 00:04:18.036 ************************************ 00:04:18.036 17:20:44 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:18.036 17:20:44 -- scheduler/scheduler.sh@46 -- # killprocess 1896645 00:04:18.036 17:20:44 -- common/autotest_common.sh@936 -- # '[' -z 1896645 ']' 00:04:18.036 17:20:44 -- common/autotest_common.sh@940 -- # kill -0 1896645 00:04:18.036 17:20:44 -- common/autotest_common.sh@941 -- # uname 00:04:18.036 17:20:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:18.036 17:20:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1896645 00:04:18.036 17:20:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:18.036 17:20:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:18.036 17:20:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1896645' 00:04:18.036 killing process with pid 1896645 00:04:18.036 17:20:44 -- common/autotest_common.sh@955 -- # kill 1896645 00:04:18.036 17:20:44 -- common/autotest_common.sh@960 -- # wait 1896645 00:04:18.294 [2024-04-18 17:20:44.702080] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
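The scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete methods driven in the subtest above are not part of the stock rpc.py command set; they come from the test-local scheduler_plugin that rpc_cmd loads via --plugin, and only the scheduler test app implements them. A condensed sketch of the same sequence, assuming scripts/rpc.py can import scheduler_plugin (the PYTHONPATH handling that rpc_cmd normally provides) and that paths are relative to the repo root:

  rpc() { ./scripts/rpc.py --plugin scheduler_plugin "$@"; }   # illustrative wrapper standing in for rpc_cmd
  rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned by cpumask 0x1, simulates 100% busy time
  rpc scheduler_thread_create -n one_third_active -a 30        # unpinned, ~30% busy
  tid=$(rpc scheduler_thread_create -n half_active -a 0)       # starts idle; the RPC prints the new thread id
  rpc scheduler_thread_set_active "$tid" 50                    # raise its reported load to 50% at runtime
  tid=$(rpc scheduler_thread_create -n deleted -a 100)
  rpc scheduler_thread_delete "$tid"                           # threads can also be torn down while the app runs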
00:04:18.294 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:18.294 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:18.294 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:18.294 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:18.294 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:18.294 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:18.294 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:18.294 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:18.553 00:04:18.553 real 0m3.014s 00:04:18.553 user 0m3.915s 00:04:18.553 sys 0m0.387s 00:04:18.553 17:20:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:18.553 17:20:44 -- common/autotest_common.sh@10 -- # set +x 00:04:18.553 ************************************ 00:04:18.553 END TEST event_scheduler 00:04:18.553 ************************************ 00:04:18.553 17:20:44 -- event/event.sh@51 -- # modprobe -n nbd 00:04:18.553 17:20:44 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:18.553 17:20:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.553 17:20:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.553 17:20:44 -- common/autotest_common.sh@10 -- # set +x 00:04:18.553 ************************************ 00:04:18.553 START TEST app_repeat 00:04:18.553 ************************************ 00:04:18.553 17:20:45 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:04:18.553 17:20:45 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.553 17:20:45 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.553 17:20:45 -- event/event.sh@13 -- # local nbd_list 00:04:18.553 17:20:45 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.553 17:20:45 -- event/event.sh@14 -- # local bdev_list 00:04:18.553 17:20:45 -- event/event.sh@15 -- # local repeat_times=4 00:04:18.553 17:20:45 -- event/event.sh@17 -- # modprobe nbd 00:04:18.553 17:20:45 -- event/event.sh@19 -- # repeat_pid=1897106 00:04:18.553 17:20:45 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:18.553 17:20:45 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.553 17:20:45 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1897106' 00:04:18.553 Process app_repeat pid: 1897106 00:04:18.553 17:20:45 -- event/event.sh@23 -- # for i in {0..2} 00:04:18.553 17:20:45 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:18.553 spdk_app_start Round 0 00:04:18.553 17:20:45 -- event/event.sh@25 -- # waitforlisten 1897106 /var/tmp/spdk-nbd.sock 00:04:18.553 17:20:45 -- common/autotest_common.sh@817 -- # '[' -z 1897106 ']' 00:04:18.553 17:20:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:18.553 17:20:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:18.553 17:20:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:18.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:18.553 17:20:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:18.553 17:20:45 -- common/autotest_common.sh@10 -- # set +x 00:04:18.813 [2024-04-18 17:20:45.099597] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:18.813 [2024-04-18 17:20:45.099658] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1897106 ] 00:04:18.813 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.813 [2024-04-18 17:20:45.162566] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.813 [2024-04-18 17:20:45.276338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.813 [2024-04-18 17:20:45.276343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.071 17:20:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:19.071 17:20:45 -- common/autotest_common.sh@850 -- # return 0 00:04:19.071 17:20:45 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.329 Malloc0 00:04:19.329 17:20:45 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.587 Malloc1 00:04:19.587 17:20:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@12 -- # local i 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.587 17:20:45 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:19.845 /dev/nbd0 00:04:19.845 17:20:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:19.845 17:20:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:19.845 17:20:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:19.845 17:20:46 -- common/autotest_common.sh@855 -- # local i 00:04:19.845 17:20:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:19.845 17:20:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:19.845 17:20:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:19.845 17:20:46 -- common/autotest_common.sh@859 -- # 
break 00:04:19.845 17:20:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:19.845 17:20:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:19.845 17:20:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.845 1+0 records in 00:04:19.845 1+0 records out 00:04:19.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199519 s, 20.5 MB/s 00:04:19.845 17:20:46 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:19.845 17:20:46 -- common/autotest_common.sh@872 -- # size=4096 00:04:19.845 17:20:46 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:19.845 17:20:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:19.845 17:20:46 -- common/autotest_common.sh@875 -- # return 0 00:04:19.845 17:20:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.845 17:20:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.845 17:20:46 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:20.103 /dev/nbd1 00:04:20.103 17:20:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:20.103 17:20:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:20.103 17:20:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:20.103 17:20:46 -- common/autotest_common.sh@855 -- # local i 00:04:20.103 17:20:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:20.104 17:20:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:20.104 17:20:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:20.104 17:20:46 -- common/autotest_common.sh@859 -- # break 00:04:20.104 17:20:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:20.104 17:20:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:20.104 17:20:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:20.104 1+0 records in 00:04:20.104 1+0 records out 00:04:20.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210056 s, 19.5 MB/s 00:04:20.104 17:20:46 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:20.104 17:20:46 -- common/autotest_common.sh@872 -- # size=4096 00:04:20.104 17:20:46 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:20.104 17:20:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:20.104 17:20:46 -- common/autotest_common.sh@875 -- # return 0 00:04:20.104 17:20:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.104 17:20:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.104 17:20:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.104 17:20:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.104 17:20:46 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:20.361 { 00:04:20.361 "nbd_device": "/dev/nbd0", 00:04:20.361 "bdev_name": "Malloc0" 00:04:20.361 }, 00:04:20.361 { 00:04:20.361 "nbd_device": "/dev/nbd1", 00:04:20.361 "bdev_name": "Malloc1" 00:04:20.361 } 00:04:20.361 ]' 
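For orientation, the Round 0 setup traced above reduces to a short RPC sequence against the app_repeat instance. The lines below are a hand-written condensation of that flow, not an excerpt of the test scripts; they assume an SPDK application is already serving /var/tmp/spdk-nbd.sock and that rpc.py sits at the workspace path shown in the log.

  RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  # Two 64 MB malloc bdevs with a 4096-byte block size; names are auto-assigned.
  $RPC bdev_malloc_create 64 4096        # prints Malloc0
  $RPC bdev_malloc_create 64 4096        # prints Malloc1

  # Export each bdev as a kernel NBD device.
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1

  # Simplified stand-in for the waitfornbd helper: the real one caps retries at 20
  # and also does a one-block direct-I/O read to confirm the device answers.
  for nbd in nbd0 nbd1; do
    until grep -q -w "$nbd" /proc/partitions; do sleep 0.2; done
  done

  # Both devices should now be reported.
  $RPC nbd_get_disks | jq -r '.[] | .nbd_device'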
00:04:20.361 17:20:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:20.361 { 00:04:20.361 "nbd_device": "/dev/nbd0", 00:04:20.361 "bdev_name": "Malloc0" 00:04:20.361 }, 00:04:20.361 { 00:04:20.361 "nbd_device": "/dev/nbd1", 00:04:20.361 "bdev_name": "Malloc1" 00:04:20.361 } 00:04:20.361 ]' 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:20.361 /dev/nbd1' 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:20.361 /dev/nbd1' 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@65 -- # count=2 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@95 -- # count=2 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:20.361 256+0 records in 00:04:20.361 256+0 records out 00:04:20.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00378552 s, 277 MB/s 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:20.361 256+0 records in 00:04:20.361 256+0 records out 00:04:20.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235422 s, 44.5 MB/s 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:20.361 256+0 records in 00:04:20.361 256+0 records out 00:04:20.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291425 s, 36.0 MB/s 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
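The write/verify pass that just completed follows a simple pattern: stage 1 MiB of random data, push it through each NBD device with direct I/O, then byte-compare the device contents against the staging file. A minimal stand-alone rendering, assuming /dev/nbd0 and /dev/nbd1 are still connected; the scratch path here is arbitrary, while the test itself uses spdk/test/event/nbdrandtest.

  TMP=/tmp/nbdrandtest                                       # arbitrary scratch file for this sketch
  dd if=/dev/urandom of="$TMP" bs=4096 count=256             # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct    # write through the exported bdev
  done
  for dev in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$TMP" "$dev"                               # exits non-zero on the first differing byte
  done
  rm "$TMP"

Because cmp fails on the first mismatch, the trace above is a pass/fail data-integrity check rather than a throughput measurement.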
00:04:20.361 17:20:46 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@51 -- # local i 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.361 17:20:46 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:20.619 17:20:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:20.619 17:20:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:20.619 17:20:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:20.619 17:20:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.619 17:20:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.619 17:20:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:20.619 17:20:47 -- bdev/nbd_common.sh@41 -- # break 00:04:20.619 17:20:47 -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.619 17:20:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.619 17:20:47 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:20.877 17:20:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:20.877 17:20:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:20.877 17:20:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:20.877 17:20:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.877 17:20:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.877 17:20:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:20.877 17:20:47 -- bdev/nbd_common.sh@41 -- # break 00:04:20.877 17:20:47 -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.877 17:20:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.877 17:20:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.877 17:20:47 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:21.135 17:20:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:21.135 17:20:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:21.135 17:20:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:21.135 17:20:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:21.135 17:20:47 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:21.135 17:20:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:21.135 17:20:47 -- bdev/nbd_common.sh@65 -- # true 00:04:21.135 17:20:47 -- bdev/nbd_common.sh@65 -- # count=0 00:04:21.135 17:20:47 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:21.135 17:20:47 -- bdev/nbd_common.sh@104 -- # count=0 00:04:21.135 17:20:47 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:21.135 17:20:47 -- bdev/nbd_common.sh@109 -- # return 0 00:04:21.135 17:20:47 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:21.395 17:20:47 -- event/event.sh@35 -- # sleep 3 00:04:21.655 [2024-04-18 17:20:48.120915] app.c: 828:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:04:21.915 [2024-04-18 17:20:48.234517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.915 [2024-04-18 17:20:48.234518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.915 [2024-04-18 17:20:48.291966] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:21.915 [2024-04-18 17:20:48.292038] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:24.451 17:20:50 -- event/event.sh@23 -- # for i in {0..2} 00:04:24.451 17:20:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:24.451 spdk_app_start Round 1 00:04:24.451 17:20:50 -- event/event.sh@25 -- # waitforlisten 1897106 /var/tmp/spdk-nbd.sock 00:04:24.451 17:20:50 -- common/autotest_common.sh@817 -- # '[' -z 1897106 ']' 00:04:24.451 17:20:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.451 17:20:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:24.451 17:20:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:24.451 17:20:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:24.451 17:20:50 -- common/autotest_common.sh@10 -- # set +x 00:04:24.709 17:20:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:24.709 17:20:51 -- common/autotest_common.sh@850 -- # return 0 00:04:24.709 17:20:51 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.967 Malloc0 00:04:24.967 17:20:51 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.225 Malloc1 00:04:25.225 17:20:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@12 -- # local i 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.225 17:20:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:25.482 /dev/nbd0 00:04:25.482 17:20:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:25.482 17:20:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:25.482 
17:20:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:25.482 17:20:51 -- common/autotest_common.sh@855 -- # local i 00:04:25.482 17:20:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:25.482 17:20:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:25.482 17:20:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:25.482 17:20:51 -- common/autotest_common.sh@859 -- # break 00:04:25.482 17:20:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:25.482 17:20:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:25.482 17:20:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.482 1+0 records in 00:04:25.482 1+0 records out 00:04:25.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000129398 s, 31.7 MB/s 00:04:25.482 17:20:51 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:25.482 17:20:51 -- common/autotest_common.sh@872 -- # size=4096 00:04:25.482 17:20:51 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:25.482 17:20:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:25.482 17:20:51 -- common/autotest_common.sh@875 -- # return 0 00:04:25.482 17:20:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.482 17:20:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.483 17:20:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:25.740 /dev/nbd1 00:04:25.740 17:20:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:25.740 17:20:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:25.740 17:20:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:25.740 17:20:52 -- common/autotest_common.sh@855 -- # local i 00:04:25.740 17:20:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:25.740 17:20:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:25.740 17:20:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:25.740 17:20:52 -- common/autotest_common.sh@859 -- # break 00:04:25.740 17:20:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:25.740 17:20:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:25.740 17:20:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.740 1+0 records in 00:04:25.740 1+0 records out 00:04:25.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208189 s, 19.7 MB/s 00:04:25.740 17:20:52 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:25.740 17:20:52 -- common/autotest_common.sh@872 -- # size=4096 00:04:25.740 17:20:52 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:25.740 17:20:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:25.740 17:20:52 -- common/autotest_common.sh@875 -- # return 0 00:04:25.740 17:20:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.740 17:20:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.740 17:20:52 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.740 17:20:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.740 
17:20:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:25.998 { 00:04:25.998 "nbd_device": "/dev/nbd0", 00:04:25.998 "bdev_name": "Malloc0" 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "nbd_device": "/dev/nbd1", 00:04:25.998 "bdev_name": "Malloc1" 00:04:25.998 } 00:04:25.998 ]' 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.998 { 00:04:25.998 "nbd_device": "/dev/nbd0", 00:04:25.998 "bdev_name": "Malloc0" 00:04:25.998 }, 00:04:25.998 { 00:04:25.998 "nbd_device": "/dev/nbd1", 00:04:25.998 "bdev_name": "Malloc1" 00:04:25.998 } 00:04:25.998 ]' 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:25.998 /dev/nbd1' 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:25.998 /dev/nbd1' 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@65 -- # count=2 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@95 -- # count=2 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:25.998 256+0 records in 00:04:25.998 256+0 records out 00:04:25.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503432 s, 208 MB/s 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.998 17:20:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:25.998 256+0 records in 00:04:25.998 256+0 records out 00:04:25.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238278 s, 44.0 MB/s 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:25.999 256+0 records in 00:04:25.999 256+0 records out 00:04:25.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260831 s, 40.2 MB/s 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:25.999 
17:20:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@51 -- # local i 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:25.999 17:20:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:26.257 17:20:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:26.257 17:20:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:26.257 17:20:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:26.257 17:20:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.257 17:20:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.257 17:20:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:26.257 17:20:52 -- bdev/nbd_common.sh@41 -- # break 00:04:26.257 17:20:52 -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.257 17:20:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.257 17:20:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:26.516 17:20:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:26.516 17:20:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:26.516 17:20:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:26.516 17:20:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.516 17:20:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.516 17:20:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:26.516 17:20:52 -- bdev/nbd_common.sh@41 -- # break 00:04:26.516 17:20:52 -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.516 17:20:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.516 17:20:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.516 17:20:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.773 17:20:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.773 17:20:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.773 17:20:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.773 17:20:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.773 17:20:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.773 17:20:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.773 17:20:53 -- bdev/nbd_common.sh@65 -- # true 00:04:26.773 17:20:53 -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.773 17:20:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.773 17:20:53 -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.773 
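Teardown in each round is the mirror image of the setup: detach both NBD devices, wait for them to leave /proc/partitions, and confirm that nbd_get_disks comes back empty before the event app is killed over RPC. A condensed sketch under the same socket and path assumptions as the setup sketch above; the real waitfornbd_exit helper also caps its retries at 20.

  RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  for dev in /dev/nbd0 /dev/nbd1; do
    $RPC nbd_stop_disk "$dev"
    while grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.2; done
  done

  # grep -c prints 0 when nothing matches; the test treats any non-zero count as a failure.
  count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
  [ "$count" -eq 0 ] || echo "unexpected nbd devices still present" >&2

  $RPC spdk_kill_instance SIGTERM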
17:20:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.773 17:20:53 -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.773 17:20:53 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:27.031 17:20:53 -- event/event.sh@35 -- # sleep 3 00:04:27.322 [2024-04-18 17:20:53.791772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.580 [2024-04-18 17:20:53.904686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.580 [2024-04-18 17:20:53.904691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.580 [2024-04-18 17:20:53.964099] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:27.580 [2024-04-18 17:20:53.964180] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:30.115 17:20:56 -- event/event.sh@23 -- # for i in {0..2} 00:04:30.115 17:20:56 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:30.115 spdk_app_start Round 2 00:04:30.115 17:20:56 -- event/event.sh@25 -- # waitforlisten 1897106 /var/tmp/spdk-nbd.sock 00:04:30.115 17:20:56 -- common/autotest_common.sh@817 -- # '[' -z 1897106 ']' 00:04:30.115 17:20:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:30.115 17:20:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:30.115 17:20:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:30.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:30.115 17:20:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:30.115 17:20:56 -- common/autotest_common.sh@10 -- # set +x 00:04:30.373 17:20:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:30.373 17:20:56 -- common/autotest_common.sh@850 -- # return 0 00:04:30.373 17:20:56 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.634 Malloc0 00:04:30.634 17:20:57 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:30.893 Malloc1 00:04:30.893 17:20:57 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@12 -- # local i 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:30.893 17:20:57 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:31.153 /dev/nbd0 00:04:31.153 17:20:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:31.153 17:20:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:31.153 17:20:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:31.153 17:20:57 -- common/autotest_common.sh@855 -- # local i 00:04:31.153 17:20:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:31.153 17:20:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:31.153 17:20:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:31.153 17:20:57 -- common/autotest_common.sh@859 -- # break 00:04:31.153 17:20:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:31.154 17:20:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:31.154 17:20:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.154 1+0 records in 00:04:31.154 1+0 records out 00:04:31.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190729 s, 21.5 MB/s 00:04:31.154 17:20:57 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:31.154 17:20:57 -- common/autotest_common.sh@872 -- # size=4096 00:04:31.154 17:20:57 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:31.154 17:20:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:31.154 17:20:57 -- common/autotest_common.sh@875 -- # return 0 00:04:31.154 17:20:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.154 17:20:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.154 17:20:57 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:31.450 /dev/nbd1 00:04:31.450 17:20:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:31.450 17:20:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:31.450 17:20:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:31.450 17:20:57 -- common/autotest_common.sh@855 -- # local i 00:04:31.450 17:20:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:31.450 17:20:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:31.450 17:20:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:31.450 17:20:57 -- common/autotest_common.sh@859 -- # break 00:04:31.450 17:20:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:31.450 17:20:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:31.450 17:20:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.450 1+0 records in 00:04:31.450 1+0 records out 00:04:31.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199831 s, 20.5 MB/s 00:04:31.450 17:20:57 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:31.450 17:20:57 -- common/autotest_common.sh@872 -- # size=4096 00:04:31.450 17:20:57 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:31.450 17:20:57 -- 
common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:31.450 17:20:57 -- common/autotest_common.sh@875 -- # return 0 00:04:31.450 17:20:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.450 17:20:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.450 17:20:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.450 17:20:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.450 17:20:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:31.709 { 00:04:31.709 "nbd_device": "/dev/nbd0", 00:04:31.709 "bdev_name": "Malloc0" 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "nbd_device": "/dev/nbd1", 00:04:31.709 "bdev_name": "Malloc1" 00:04:31.709 } 00:04:31.709 ]' 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:31.709 { 00:04:31.709 "nbd_device": "/dev/nbd0", 00:04:31.709 "bdev_name": "Malloc0" 00:04:31.709 }, 00:04:31.709 { 00:04:31.709 "nbd_device": "/dev/nbd1", 00:04:31.709 "bdev_name": "Malloc1" 00:04:31.709 } 00:04:31.709 ]' 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:31.709 /dev/nbd1' 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:31.709 /dev/nbd1' 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@65 -- # count=2 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@95 -- # count=2 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:31.709 256+0 records in 00:04:31.709 256+0 records out 00:04:31.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501605 s, 209 MB/s 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:31.709 256+0 records in 00:04:31.709 256+0 records out 00:04:31.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241338 s, 43.4 MB/s 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:31.709 256+0 records in 00:04:31.709 256+0 records out 00:04:31.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025872 s, 40.5 MB/s 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:04:31.709 17:20:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:31.709 17:20:58 -- bdev/nbd_common.sh@51 -- # local i 00:04:31.710 17:20:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.710 17:20:58 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:31.968 17:20:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:31.968 17:20:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:31.968 17:20:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:31.968 17:20:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:31.968 17:20:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:31.968 17:20:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:31.968 17:20:58 -- bdev/nbd_common.sh@41 -- # break 00:04:31.968 17:20:58 -- bdev/nbd_common.sh@45 -- # return 0 00:04:31.969 17:20:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:31.969 17:20:58 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:32.227 17:20:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:32.227 17:20:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:32.227 17:20:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:32.227 17:20:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.227 17:20:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.227 17:20:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:32.227 17:20:58 -- bdev/nbd_common.sh@41 -- # break 00:04:32.227 17:20:58 -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.227 17:20:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.227 17:20:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.227 17:20:58 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.485 17:20:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:32.485 17:20:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:32.485 17:20:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:04:32.485 17:20:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:32.485 17:20:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:32.485 17:20:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.486 17:20:58 -- bdev/nbd_common.sh@65 -- # true 00:04:32.486 17:20:58 -- bdev/nbd_common.sh@65 -- # count=0 00:04:32.486 17:20:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:32.486 17:20:58 -- bdev/nbd_common.sh@104 -- # count=0 00:04:32.486 17:20:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:32.486 17:20:58 -- bdev/nbd_common.sh@109 -- # return 0 00:04:32.486 17:20:58 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:32.745 17:20:59 -- event/event.sh@35 -- # sleep 3 00:04:33.005 [2024-04-18 17:20:59.488389] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.263 [2024-04-18 17:20:59.594737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.263 [2024-04-18 17:20:59.594740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.263 [2024-04-18 17:20:59.656701] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:33.263 [2024-04-18 17:20:59.656782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:35.791 17:21:02 -- event/event.sh@38 -- # waitforlisten 1897106 /var/tmp/spdk-nbd.sock 00:04:35.791 17:21:02 -- common/autotest_common.sh@817 -- # '[' -z 1897106 ']' 00:04:35.791 17:21:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:35.791 17:21:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:35.791 17:21:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:35.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:35.791 17:21:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:35.791 17:21:02 -- common/autotest_common.sh@10 -- # set +x 00:04:36.050 17:21:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:36.050 17:21:02 -- common/autotest_common.sh@850 -- # return 0 00:04:36.050 17:21:02 -- event/event.sh@39 -- # killprocess 1897106 00:04:36.050 17:21:02 -- common/autotest_common.sh@936 -- # '[' -z 1897106 ']' 00:04:36.050 17:21:02 -- common/autotest_common.sh@940 -- # kill -0 1897106 00:04:36.050 17:21:02 -- common/autotest_common.sh@941 -- # uname 00:04:36.050 17:21:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:36.050 17:21:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1897106 00:04:36.050 17:21:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:36.050 17:21:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:36.051 17:21:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1897106' 00:04:36.051 killing process with pid 1897106 00:04:36.051 17:21:02 -- common/autotest_common.sh@955 -- # kill 1897106 00:04:36.051 17:21:02 -- common/autotest_common.sh@960 -- # wait 1897106 00:04:36.310 spdk_app_start is called in Round 0. 00:04:36.310 Shutdown signal received, stop current app iteration 00:04:36.310 Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 reinitialization... 00:04:36.310 spdk_app_start is called in Round 1. 
00:04:36.310 Shutdown signal received, stop current app iteration 00:04:36.310 Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 reinitialization... 00:04:36.310 spdk_app_start is called in Round 2. 00:04:36.310 Shutdown signal received, stop current app iteration 00:04:36.310 Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 reinitialization... 00:04:36.310 spdk_app_start is called in Round 3. 00:04:36.310 Shutdown signal received, stop current app iteration 00:04:36.310 17:21:02 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:36.310 17:21:02 -- event/event.sh@42 -- # return 0 00:04:36.310 00:04:36.310 real 0m17.674s 00:04:36.310 user 0m38.528s 00:04:36.310 sys 0m3.233s 00:04:36.310 17:21:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.310 17:21:02 -- common/autotest_common.sh@10 -- # set +x 00:04:36.310 ************************************ 00:04:36.310 END TEST app_repeat 00:04:36.310 ************************************ 00:04:36.310 17:21:02 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:36.310 17:21:02 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:36.310 17:21:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.310 17:21:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.310 17:21:02 -- common/autotest_common.sh@10 -- # set +x 00:04:36.569 ************************************ 00:04:36.569 START TEST cpu_locks 00:04:36.569 ************************************ 00:04:36.569 17:21:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:36.569 * Looking for test storage... 00:04:36.569 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:36.569 17:21:02 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:36.569 17:21:02 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:36.569 17:21:02 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:36.569 17:21:02 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:36.569 17:21:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.569 17:21:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.569 17:21:02 -- common/autotest_common.sh@10 -- # set +x 00:04:36.569 ************************************ 00:04:36.569 START TEST default_locks 00:04:36.569 ************************************ 00:04:36.569 17:21:03 -- common/autotest_common.sh@1111 -- # default_locks 00:04:36.569 17:21:03 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1899579 00:04:36.569 17:21:03 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.569 17:21:03 -- event/cpu_locks.sh@47 -- # waitforlisten 1899579 00:04:36.569 17:21:03 -- common/autotest_common.sh@817 -- # '[' -z 1899579 ']' 00:04:36.569 17:21:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.569 17:21:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:36.569 17:21:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:36.569 17:21:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:36.569 17:21:03 -- common/autotest_common.sh@10 -- # set +x 00:04:36.569 [2024-04-18 17:21:03.052606] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:36.569 [2024-04-18 17:21:03.052692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899579 ] 00:04:36.569 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.829 [2024-04-18 17:21:03.111274] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.829 [2024-04-18 17:21:03.220171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.088 17:21:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:37.088 17:21:03 -- common/autotest_common.sh@850 -- # return 0 00:04:37.088 17:21:03 -- event/cpu_locks.sh@49 -- # locks_exist 1899579 00:04:37.088 17:21:03 -- event/cpu_locks.sh@22 -- # lslocks -p 1899579 00:04:37.088 17:21:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.347 lslocks: write error 00:04:37.347 17:21:03 -- event/cpu_locks.sh@50 -- # killprocess 1899579 00:04:37.347 17:21:03 -- common/autotest_common.sh@936 -- # '[' -z 1899579 ']' 00:04:37.347 17:21:03 -- common/autotest_common.sh@940 -- # kill -0 1899579 00:04:37.347 17:21:03 -- common/autotest_common.sh@941 -- # uname 00:04:37.347 17:21:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:37.347 17:21:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1899579 00:04:37.347 17:21:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:37.347 17:21:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:37.347 17:21:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1899579' 00:04:37.347 killing process with pid 1899579 00:04:37.347 17:21:03 -- common/autotest_common.sh@955 -- # kill 1899579 00:04:37.347 17:21:03 -- common/autotest_common.sh@960 -- # wait 1899579 00:04:37.914 17:21:04 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1899579 00:04:37.914 17:21:04 -- common/autotest_common.sh@638 -- # local es=0 00:04:37.914 17:21:04 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1899579 00:04:37.914 17:21:04 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:37.914 17:21:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:37.914 17:21:04 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:37.914 17:21:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:37.914 17:21:04 -- common/autotest_common.sh@641 -- # waitforlisten 1899579 00:04:37.914 17:21:04 -- common/autotest_common.sh@817 -- # '[' -z 1899579 ']' 00:04:37.914 17:21:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.914 17:21:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:37.914 17:21:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:37.914 17:21:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:37.914 17:21:04 -- common/autotest_common.sh@10 -- # set +x 00:04:37.914 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1899579) - No such process 00:04:37.914 ERROR: process (pid: 1899579) is no longer running 00:04:37.914 17:21:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:37.914 17:21:04 -- common/autotest_common.sh@850 -- # return 1 00:04:37.914 17:21:04 -- common/autotest_common.sh@641 -- # es=1 00:04:37.914 17:21:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:37.914 17:21:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:37.914 17:21:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:37.914 17:21:04 -- event/cpu_locks.sh@54 -- # no_locks 00:04:37.914 17:21:04 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:37.914 17:21:04 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:37.914 17:21:04 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:37.914 00:04:37.914 real 0m1.183s 00:04:37.914 user 0m1.101s 00:04:37.914 sys 0m0.523s 00:04:37.914 17:21:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:37.914 17:21:04 -- common/autotest_common.sh@10 -- # set +x 00:04:37.914 ************************************ 00:04:37.914 END TEST default_locks 00:04:37.914 ************************************ 00:04:37.914 17:21:04 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:37.914 17:21:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.914 17:21:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.914 17:21:04 -- common/autotest_common.sh@10 -- # set +x 00:04:37.914 ************************************ 00:04:37.914 START TEST default_locks_via_rpc 00:04:37.914 ************************************ 00:04:37.914 17:21:04 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:04:37.914 17:21:04 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1899750 00:04:37.914 17:21:04 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.914 17:21:04 -- event/cpu_locks.sh@63 -- # waitforlisten 1899750 00:04:37.914 17:21:04 -- common/autotest_common.sh@817 -- # '[' -z 1899750 ']' 00:04:37.914 17:21:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.914 17:21:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:37.914 17:21:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.914 17:21:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:37.914 17:21:04 -- common/autotest_common.sh@10 -- # set +x 00:04:37.914 [2024-04-18 17:21:04.359444] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
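The default_locks run that finishes above checks two things: a target started with -m 0x1 holds a per-core file lock that is visible from outside the process via lslocks, and once that target is killed, waiting on its pid fails quickly ("No such process") rather than hanging. A rough manual replay under the same build-tree paths; the "lslocks: write error" seen in the log is most likely lslocks hitting a closed pipe because grep -q exits on the first match, not a test failure.

  SPDK_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt

  $SPDK_TGT -m 0x1 &                     # single-core target; takes the core 0 lock
  pid=$!
  sleep 2                                # crude stand-in for waitforlisten polling /var/tmp/spdk.sock

  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"

  kill "$pid" && wait "$pid"             # killprocess in the test also checks the process name first
  lslocks -p "$pid" 2>/dev/null | grep -q spdk_cpu_lock || echo "lock released"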
00:04:37.914 [2024-04-18 17:21:04.359525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899750 ] 00:04:37.914 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.914 [2024-04-18 17:21:04.417664] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.174 [2024-04-18 17:21:04.528654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.434 17:21:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:38.434 17:21:04 -- common/autotest_common.sh@850 -- # return 0 00:04:38.434 17:21:04 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:38.434 17:21:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:38.434 17:21:04 -- common/autotest_common.sh@10 -- # set +x 00:04:38.434 17:21:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:38.434 17:21:04 -- event/cpu_locks.sh@67 -- # no_locks 00:04:38.434 17:21:04 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:38.434 17:21:04 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:38.434 17:21:04 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:38.434 17:21:04 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:38.434 17:21:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:38.434 17:21:04 -- common/autotest_common.sh@10 -- # set +x 00:04:38.434 17:21:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:38.434 17:21:04 -- event/cpu_locks.sh@71 -- # locks_exist 1899750 00:04:38.434 17:21:04 -- event/cpu_locks.sh@22 -- # lslocks -p 1899750 00:04:38.434 17:21:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:38.692 17:21:05 -- event/cpu_locks.sh@73 -- # killprocess 1899750 00:04:38.692 17:21:05 -- common/autotest_common.sh@936 -- # '[' -z 1899750 ']' 00:04:38.692 17:21:05 -- common/autotest_common.sh@940 -- # kill -0 1899750 00:04:38.692 17:21:05 -- common/autotest_common.sh@941 -- # uname 00:04:38.692 17:21:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:38.692 17:21:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1899750 00:04:38.692 17:21:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:38.692 17:21:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:38.692 17:21:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1899750' 00:04:38.692 killing process with pid 1899750 00:04:38.692 17:21:05 -- common/autotest_common.sh@955 -- # kill 1899750 00:04:38.692 17:21:05 -- common/autotest_common.sh@960 -- # wait 1899750 00:04:39.259 00:04:39.259 real 0m1.192s 00:04:39.259 user 0m1.126s 00:04:39.259 sys 0m0.515s 00:04:39.259 17:21:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:39.259 17:21:05 -- common/autotest_common.sh@10 -- # set +x 00:04:39.259 ************************************ 00:04:39.259 END TEST default_locks_via_rpc 00:04:39.259 ************************************ 00:04:39.259 17:21:05 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:39.259 17:21:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.259 17:21:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.259 17:21:05 -- common/autotest_common.sh@10 -- # set +x 00:04:39.259 ************************************ 00:04:39.259 START TEST non_locking_app_on_locked_coremask 
00:04:39.259 ************************************ 00:04:39.259 17:21:05 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:04:39.259 17:21:05 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1899916 00:04:39.259 17:21:05 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.259 17:21:05 -- event/cpu_locks.sh@81 -- # waitforlisten 1899916 /var/tmp/spdk.sock 00:04:39.259 17:21:05 -- common/autotest_common.sh@817 -- # '[' -z 1899916 ']' 00:04:39.259 17:21:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.259 17:21:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:39.259 17:21:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.259 17:21:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:39.259 17:21:05 -- common/autotest_common.sh@10 -- # set +x 00:04:39.259 [2024-04-18 17:21:05.675098] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:39.259 [2024-04-18 17:21:05.675184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899916 ] 00:04:39.259 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.259 [2024-04-18 17:21:05.738667] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.518 [2024-04-18 17:21:05.854901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.777 17:21:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:39.777 17:21:06 -- common/autotest_common.sh@850 -- # return 0 00:04:39.777 17:21:06 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1899935 00:04:39.777 17:21:06 -- event/cpu_locks.sh@85 -- # waitforlisten 1899935 /var/tmp/spdk2.sock 00:04:39.777 17:21:06 -- common/autotest_common.sh@817 -- # '[' -z 1899935 ']' 00:04:39.777 17:21:06 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:39.777 17:21:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:39.777 17:21:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:39.777 17:21:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:39.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:39.777 17:21:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:39.777 17:21:06 -- common/autotest_common.sh@10 -- # set +x 00:04:39.777 [2024-04-18 17:21:06.168705] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:39.777 [2024-04-18 17:21:06.168829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899935 ] 00:04:39.777 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.777 [2024-04-18 17:21:06.260995] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
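What the non_locking_app_on_locked_coremask trace is demonstrating: the second target is allowed to come up on the same core mask as the first only because it is started with --disable-cpumask-locks, which is why it logs "CPU core locks deactivated." instead of failing to take the core 0 lock. Stripped of the trace plumbing, the arrangement looks roughly like this (paths and socket names as in the log):

  SPDK_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt

  # First instance owns core 0 and its lock; RPC on the default /var/tmp/spdk.sock.
  $SPDK_TGT -m 0x1 &
  pid1=$!

  # Second instance reuses core 0 but opts out of the lock and uses its own RPC socket.
  $SPDK_TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!

  sleep 2                                 # crude stand-in for waitforlisten on both sockets
  lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "pid1 holds the core lock"
  lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "pid2 holds no core lock"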
00:04:39.777 [2024-04-18 17:21:06.261038] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.036 [2024-04-18 17:21:06.485295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.605 17:21:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:40.605 17:21:07 -- common/autotest_common.sh@850 -- # return 0 00:04:40.605 17:21:07 -- event/cpu_locks.sh@87 -- # locks_exist 1899916 00:04:40.605 17:21:07 -- event/cpu_locks.sh@22 -- # lslocks -p 1899916 00:04:40.605 17:21:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:41.172 lslocks: write error 00:04:41.172 17:21:07 -- event/cpu_locks.sh@89 -- # killprocess 1899916 00:04:41.172 17:21:07 -- common/autotest_common.sh@936 -- # '[' -z 1899916 ']' 00:04:41.172 17:21:07 -- common/autotest_common.sh@940 -- # kill -0 1899916 00:04:41.172 17:21:07 -- common/autotest_common.sh@941 -- # uname 00:04:41.172 17:21:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:41.172 17:21:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1899916 00:04:41.172 17:21:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:41.172 17:21:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:41.172 17:21:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1899916' 00:04:41.172 killing process with pid 1899916 00:04:41.172 17:21:07 -- common/autotest_common.sh@955 -- # kill 1899916 00:04:41.172 17:21:07 -- common/autotest_common.sh@960 -- # wait 1899916 00:04:42.109 17:21:08 -- event/cpu_locks.sh@90 -- # killprocess 1899935 00:04:42.109 17:21:08 -- common/autotest_common.sh@936 -- # '[' -z 1899935 ']' 00:04:42.109 17:21:08 -- common/autotest_common.sh@940 -- # kill -0 1899935 00:04:42.109 17:21:08 -- common/autotest_common.sh@941 -- # uname 00:04:42.109 17:21:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:42.109 17:21:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1899935 00:04:42.109 17:21:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:42.109 17:21:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:42.109 17:21:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1899935' 00:04:42.109 killing process with pid 1899935 00:04:42.109 17:21:08 -- common/autotest_common.sh@955 -- # kill 1899935 00:04:42.109 17:21:08 -- common/autotest_common.sh@960 -- # wait 1899935 00:04:42.368 00:04:42.368 real 0m3.263s 00:04:42.368 user 0m3.386s 00:04:42.368 sys 0m1.078s 00:04:42.368 17:21:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:42.368 17:21:08 -- common/autotest_common.sh@10 -- # set +x 00:04:42.368 ************************************ 00:04:42.368 END TEST non_locking_app_on_locked_coremask 00:04:42.368 ************************************ 00:04:42.627 17:21:08 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:42.627 17:21:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.627 17:21:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.627 17:21:08 -- common/autotest_common.sh@10 -- # set +x 00:04:42.627 ************************************ 00:04:42.627 START TEST locking_app_on_unlocked_coremask 00:04:42.627 ************************************ 00:04:42.627 17:21:09 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:04:42.627 17:21:09 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1900840 00:04:42.627 17:21:09 -- 
event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:42.627 17:21:09 -- event/cpu_locks.sh@99 -- # waitforlisten 1900840 /var/tmp/spdk.sock 00:04:42.627 17:21:09 -- common/autotest_common.sh@817 -- # '[' -z 1900840 ']' 00:04:42.627 17:21:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.627 17:21:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:42.627 17:21:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.627 17:21:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:42.627 17:21:09 -- common/autotest_common.sh@10 -- # set +x 00:04:42.627 [2024-04-18 17:21:09.055025] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:42.627 [2024-04-18 17:21:09.055122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1900840 ] 00:04:42.627 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.627 [2024-04-18 17:21:09.116692] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:42.627 [2024-04-18 17:21:09.116741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.885 [2024-04-18 17:21:09.231894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.144 17:21:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:43.144 17:21:09 -- common/autotest_common.sh@850 -- # return 0 00:04:43.144 17:21:09 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1900886 00:04:43.144 17:21:09 -- event/cpu_locks.sh@103 -- # waitforlisten 1900886 /var/tmp/spdk2.sock 00:04:43.144 17:21:09 -- common/autotest_common.sh@817 -- # '[' -z 1900886 ']' 00:04:43.144 17:21:09 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:43.144 17:21:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.144 17:21:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:43.144 17:21:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.144 17:21:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:43.144 17:21:09 -- common/autotest_common.sh@10 -- # set +x 00:04:43.144 [2024-04-18 17:21:09.533836] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:04:43.144 [2024-04-18 17:21:09.533925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1900886 ] 00:04:43.144 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.144 [2024-04-18 17:21:09.626666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.405 [2024-04-18 17:21:09.857874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.975 17:21:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:43.975 17:21:10 -- common/autotest_common.sh@850 -- # return 0 00:04:43.975 17:21:10 -- event/cpu_locks.sh@105 -- # locks_exist 1900886 00:04:43.975 17:21:10 -- event/cpu_locks.sh@22 -- # lslocks -p 1900886 00:04:43.975 17:21:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.542 lslocks: write error 00:04:44.542 17:21:10 -- event/cpu_locks.sh@107 -- # killprocess 1900840 00:04:44.542 17:21:10 -- common/autotest_common.sh@936 -- # '[' -z 1900840 ']' 00:04:44.542 17:21:10 -- common/autotest_common.sh@940 -- # kill -0 1900840 00:04:44.542 17:21:10 -- common/autotest_common.sh@941 -- # uname 00:04:44.542 17:21:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:44.542 17:21:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1900840 00:04:44.542 17:21:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:44.542 17:21:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:44.542 17:21:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1900840' 00:04:44.542 killing process with pid 1900840 00:04:44.542 17:21:10 -- common/autotest_common.sh@955 -- # kill 1900840 00:04:44.542 17:21:10 -- common/autotest_common.sh@960 -- # wait 1900840 00:04:45.478 17:21:11 -- event/cpu_locks.sh@108 -- # killprocess 1900886 00:04:45.478 17:21:11 -- common/autotest_common.sh@936 -- # '[' -z 1900886 ']' 00:04:45.478 17:21:11 -- common/autotest_common.sh@940 -- # kill -0 1900886 00:04:45.478 17:21:11 -- common/autotest_common.sh@941 -- # uname 00:04:45.478 17:21:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:45.478 17:21:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1900886 00:04:45.478 17:21:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:45.478 17:21:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:45.478 17:21:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1900886' 00:04:45.478 killing process with pid 1900886 00:04:45.478 17:21:11 -- common/autotest_common.sh@955 -- # kill 1900886 00:04:45.478 17:21:11 -- common/autotest_common.sh@960 -- # wait 1900886 00:04:45.738 00:04:45.738 real 0m3.268s 00:04:45.738 user 0m3.422s 00:04:45.738 sys 0m1.028s 00:04:45.738 17:21:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:45.738 17:21:12 -- common/autotest_common.sh@10 -- # set +x 00:04:45.738 ************************************ 00:04:45.738 END TEST locking_app_on_unlocked_coremask 00:04:45.738 ************************************ 00:04:45.998 17:21:12 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:45.998 17:21:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.998 17:21:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.998 17:21:12 -- common/autotest_common.sh@10 -- # set +x 00:04:45.998 
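The "lslocks: write error" lines above are expected noise rather than failures: the locks_exist helper in cpu_locks.sh pipes lslocks -p <pid> into grep -q, and grep -q exits as soon as it matches the spdk_cpu_lock entry, so lslocks hits a closed pipe while it is still writing. A minimal sketch of the same check (the target_pid variable is illustrative):
  # Succeeds when the process holds an SPDK CPU-core lock file; lslocks may
  # report "write error" because grep -q closes the pipe after the first match.
  lslocks -p "$target_pid" | grep -q spdk_cpu_lock && echo "spdk_cpu_lock held"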
************************************ 00:04:45.998 START TEST locking_app_on_locked_coremask 00:04:45.998 ************************************ 00:04:45.998 17:21:12 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:04:45.998 17:21:12 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1901310 00:04:45.998 17:21:12 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.998 17:21:12 -- event/cpu_locks.sh@116 -- # waitforlisten 1901310 /var/tmp/spdk.sock 00:04:45.998 17:21:12 -- common/autotest_common.sh@817 -- # '[' -z 1901310 ']' 00:04:45.998 17:21:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.998 17:21:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:45.998 17:21:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.998 17:21:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:45.998 17:21:12 -- common/autotest_common.sh@10 -- # set +x 00:04:45.998 [2024-04-18 17:21:12.450456] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:45.998 [2024-04-18 17:21:12.450541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901310 ] 00:04:45.998 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.998 [2024-04-18 17:21:12.507093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.257 [2024-04-18 17:21:12.617019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.515 17:21:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:46.515 17:21:12 -- common/autotest_common.sh@850 -- # return 0 00:04:46.515 17:21:12 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1901351 00:04:46.515 17:21:12 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:46.515 17:21:12 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1901351 /var/tmp/spdk2.sock 00:04:46.515 17:21:12 -- common/autotest_common.sh@638 -- # local es=0 00:04:46.515 17:21:12 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1901351 /var/tmp/spdk2.sock 00:04:46.515 17:21:12 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:46.515 17:21:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:46.515 17:21:12 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:46.515 17:21:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:46.515 17:21:12 -- common/autotest_common.sh@641 -- # waitforlisten 1901351 /var/tmp/spdk2.sock 00:04:46.515 17:21:12 -- common/autotest_common.sh@817 -- # '[' -z 1901351 ']' 00:04:46.515 17:21:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.515 17:21:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:46.515 17:21:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:46.515 17:21:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:46.515 17:21:12 -- common/autotest_common.sh@10 -- # set +x 00:04:46.515 [2024-04-18 17:21:12.920612] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:46.515 [2024-04-18 17:21:12.920720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901351 ] 00:04:46.515 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.515 [2024-04-18 17:21:13.014729] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1901310 has claimed it. 00:04:46.515 [2024-04-18 17:21:13.014779] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:47.088 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1901351) - No such process 00:04:47.088 ERROR: process (pid: 1901351) is no longer running 00:04:47.088 17:21:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:47.088 17:21:13 -- common/autotest_common.sh@850 -- # return 1 00:04:47.088 17:21:13 -- common/autotest_common.sh@641 -- # es=1 00:04:47.088 17:21:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:47.088 17:21:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:47.088 17:21:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:47.088 17:21:13 -- event/cpu_locks.sh@122 -- # locks_exist 1901310 00:04:47.088 17:21:13 -- event/cpu_locks.sh@22 -- # lslocks -p 1901310 00:04:47.088 17:21:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:47.652 lslocks: write error 00:04:47.652 17:21:13 -- event/cpu_locks.sh@124 -- # killprocess 1901310 00:04:47.652 17:21:13 -- common/autotest_common.sh@936 -- # '[' -z 1901310 ']' 00:04:47.652 17:21:13 -- common/autotest_common.sh@940 -- # kill -0 1901310 00:04:47.652 17:21:13 -- common/autotest_common.sh@941 -- # uname 00:04:47.652 17:21:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:47.652 17:21:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1901310 00:04:47.652 17:21:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:47.652 17:21:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:47.652 17:21:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1901310' 00:04:47.652 killing process with pid 1901310 00:04:47.652 17:21:13 -- common/autotest_common.sh@955 -- # kill 1901310 00:04:47.652 17:21:13 -- common/autotest_common.sh@960 -- # wait 1901310 00:04:47.932 00:04:47.932 real 0m2.033s 00:04:47.932 user 0m2.187s 00:04:47.932 sys 0m0.643s 00:04:47.932 17:21:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.932 17:21:14 -- common/autotest_common.sh@10 -- # set +x 00:04:47.932 ************************************ 00:04:47.932 END TEST locking_app_on_locked_coremask 00:04:47.932 ************************************ 00:04:47.932 17:21:14 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:47.932 17:21:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.932 17:21:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.932 17:21:14 -- common/autotest_common.sh@10 -- # set +x 00:04:48.196 ************************************ 00:04:48.196 START TEST locking_overlapped_coremask 00:04:48.196 
************************************ 00:04:48.196 17:21:14 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:04:48.196 17:21:14 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1901616 00:04:48.196 17:21:14 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:48.196 17:21:14 -- event/cpu_locks.sh@133 -- # waitforlisten 1901616 /var/tmp/spdk.sock 00:04:48.196 17:21:14 -- common/autotest_common.sh@817 -- # '[' -z 1901616 ']' 00:04:48.196 17:21:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.196 17:21:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:48.196 17:21:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.196 17:21:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:48.196 17:21:14 -- common/autotest_common.sh@10 -- # set +x 00:04:48.196 [2024-04-18 17:21:14.604283] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:48.196 [2024-04-18 17:21:14.604390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901616 ] 00:04:48.196 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.196 [2024-04-18 17:21:14.661852] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:48.454 [2024-04-18 17:21:14.772436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.454 [2024-04-18 17:21:14.772493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.454 [2024-04-18 17:21:14.772496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.712 17:21:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:48.712 17:21:15 -- common/autotest_common.sh@850 -- # return 0 00:04:48.712 17:21:15 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1901629 00:04:48.712 17:21:15 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1901629 /var/tmp/spdk2.sock 00:04:48.712 17:21:15 -- common/autotest_common.sh@638 -- # local es=0 00:04:48.712 17:21:15 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:48.712 17:21:15 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1901629 /var/tmp/spdk2.sock 00:04:48.712 17:21:15 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:04:48.712 17:21:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:48.712 17:21:15 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:04:48.712 17:21:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:48.712 17:21:15 -- common/autotest_common.sh@641 -- # waitforlisten 1901629 /var/tmp/spdk2.sock 00:04:48.712 17:21:15 -- common/autotest_common.sh@817 -- # '[' -z 1901629 ']' 00:04:48.712 17:21:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.712 17:21:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:48.712 17:21:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:04:48.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:48.712 17:21:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:48.712 17:21:15 -- common/autotest_common.sh@10 -- # set +x 00:04:48.712 [2024-04-18 17:21:15.075783] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:48.712 [2024-04-18 17:21:15.075876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901629 ] 00:04:48.712 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.712 [2024-04-18 17:21:15.169541] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1901616 has claimed it. 00:04:48.712 [2024-04-18 17:21:15.169594] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:49.281 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1901629) - No such process 00:04:49.281 ERROR: process (pid: 1901629) is no longer running 00:04:49.281 17:21:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:49.281 17:21:15 -- common/autotest_common.sh@850 -- # return 1 00:04:49.281 17:21:15 -- common/autotest_common.sh@641 -- # es=1 00:04:49.281 17:21:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:49.281 17:21:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:49.281 17:21:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:49.281 17:21:15 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:49.281 17:21:15 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:49.281 17:21:15 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:49.281 17:21:15 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:49.281 17:21:15 -- event/cpu_locks.sh@141 -- # killprocess 1901616 00:04:49.281 17:21:15 -- common/autotest_common.sh@936 -- # '[' -z 1901616 ']' 00:04:49.281 17:21:15 -- common/autotest_common.sh@940 -- # kill -0 1901616 00:04:49.281 17:21:15 -- common/autotest_common.sh@941 -- # uname 00:04:49.281 17:21:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:49.281 17:21:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1901616 00:04:49.281 17:21:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:49.281 17:21:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:49.281 17:21:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1901616' 00:04:49.281 killing process with pid 1901616 00:04:49.281 17:21:15 -- common/autotest_common.sh@955 -- # kill 1901616 00:04:49.281 17:21:15 -- common/autotest_common.sh@960 -- # wait 1901616 00:04:49.851 00:04:49.851 real 0m1.711s 00:04:49.851 user 0m4.540s 00:04:49.851 sys 0m0.470s 00:04:49.851 17:21:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.851 17:21:16 -- common/autotest_common.sh@10 -- # set +x 00:04:49.851 ************************************ 00:04:49.851 END TEST locking_overlapped_coremask 00:04:49.851 ************************************ 00:04:49.851 17:21:16 -- event/cpu_locks.sh@172 -- # run_test 
locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:49.851 17:21:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.851 17:21:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.851 17:21:16 -- common/autotest_common.sh@10 -- # set +x 00:04:49.851 ************************************ 00:04:49.851 START TEST locking_overlapped_coremask_via_rpc 00:04:49.851 ************************************ 00:04:49.851 17:21:16 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:04:49.851 17:21:16 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1901918 00:04:49.851 17:21:16 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:49.851 17:21:16 -- event/cpu_locks.sh@149 -- # waitforlisten 1901918 /var/tmp/spdk.sock 00:04:49.851 17:21:16 -- common/autotest_common.sh@817 -- # '[' -z 1901918 ']' 00:04:49.851 17:21:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.851 17:21:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:49.851 17:21:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.851 17:21:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:49.851 17:21:16 -- common/autotest_common.sh@10 -- # set +x 00:04:50.111 [2024-04-18 17:21:16.434120] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:50.111 [2024-04-18 17:21:16.434194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901918 ] 00:04:50.111 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.111 [2024-04-18 17:21:16.496535] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:50.111 [2024-04-18 17:21:16.496571] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:50.111 [2024-04-18 17:21:16.614769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.111 [2024-04-18 17:21:16.614796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.112 [2024-04-18 17:21:16.614800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.370 17:21:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:50.370 17:21:16 -- common/autotest_common.sh@850 -- # return 0 00:04:50.370 17:21:16 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1901928 00:04:50.370 17:21:16 -- event/cpu_locks.sh@153 -- # waitforlisten 1901928 /var/tmp/spdk2.sock 00:04:50.370 17:21:16 -- common/autotest_common.sh@817 -- # '[' -z 1901928 ']' 00:04:50.370 17:21:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.370 17:21:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:50.370 17:21:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:50.370 17:21:16 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:50.370 17:21:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:50.370 17:21:16 -- common/autotest_common.sh@10 -- # set +x 00:04:50.630 [2024-04-18 17:21:16.922593] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:50.630 [2024-04-18 17:21:16.922685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901928 ] 00:04:50.630 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.630 [2024-04-18 17:21:17.010082] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:50.630 [2024-04-18 17:21:17.010116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:50.889 [2024-04-18 17:21:17.227086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.889 [2024-04-18 17:21:17.230441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:50.889 [2024-04-18 17:21:17.230444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.455 17:21:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:51.455 17:21:17 -- common/autotest_common.sh@850 -- # return 0 00:04:51.455 17:21:17 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:51.455 17:21:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.455 17:21:17 -- common/autotest_common.sh@10 -- # set +x 00:04:51.455 17:21:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:51.455 17:21:17 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:51.455 17:21:17 -- common/autotest_common.sh@638 -- # local es=0 00:04:51.455 17:21:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:51.455 17:21:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:51.455 17:21:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:51.455 17:21:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:51.455 17:21:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:51.455 17:21:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:51.455 17:21:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:51.455 17:21:17 -- common/autotest_common.sh@10 -- # set +x 00:04:51.455 [2024-04-18 17:21:17.851479] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1901918 has claimed it. 
00:04:51.455 request: 00:04:51.455 { 00:04:51.455 "method": "framework_enable_cpumask_locks", 00:04:51.455 "req_id": 1 00:04:51.455 } 00:04:51.455 Got JSON-RPC error response 00:04:51.455 response: 00:04:51.455 { 00:04:51.455 "code": -32603, 00:04:51.455 "message": "Failed to claim CPU core: 2" 00:04:51.455 } 00:04:51.455 17:21:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:51.455 17:21:17 -- common/autotest_common.sh@641 -- # es=1 00:04:51.455 17:21:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:51.455 17:21:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:51.455 17:21:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:51.455 17:21:17 -- event/cpu_locks.sh@158 -- # waitforlisten 1901918 /var/tmp/spdk.sock 00:04:51.455 17:21:17 -- common/autotest_common.sh@817 -- # '[' -z 1901918 ']' 00:04:51.455 17:21:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.455 17:21:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:51.455 17:21:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.455 17:21:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:51.455 17:21:17 -- common/autotest_common.sh@10 -- # set +x 00:04:51.713 17:21:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:51.713 17:21:18 -- common/autotest_common.sh@850 -- # return 0 00:04:51.713 17:21:18 -- event/cpu_locks.sh@159 -- # waitforlisten 1901928 /var/tmp/spdk2.sock 00:04:51.713 17:21:18 -- common/autotest_common.sh@817 -- # '[' -z 1901928 ']' 00:04:51.713 17:21:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.713 17:21:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:51.713 17:21:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
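The JSON-RPC failure above is the expected result of locking_overlapped_coremask_via_rpc: both targets start with --disable-cpumask-locks, the first (mask 0x7, pid 1901918) then claims cores 0-2 via framework_enable_cpumask_locks, so the second (mask 0x1c, cores 2-4, listening on /var/tmp/spdk2.sock) cannot claim the shared core 2, and the test's NOT wrapper passes precisely because the RPC fails. A hand-run sketch of the same sequence (scripts/rpc.py path assumed, waits for the RPC sockets omitted):
  ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  ./scripts/rpc.py framework_enable_cpumask_locks                         # first target locks cores 0,1,2
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: core 2 is already claimed
  # Expected error, as logged above: {"code": -32603, "message": "Failed to claim CPU core: 2"}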
00:04:51.713 17:21:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:51.713 17:21:18 -- common/autotest_common.sh@10 -- # set +x 00:04:51.972 17:21:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:51.972 17:21:18 -- common/autotest_common.sh@850 -- # return 0 00:04:51.972 17:21:18 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:51.972 17:21:18 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:51.972 17:21:18 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:51.972 17:21:18 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:51.972 00:04:51.972 real 0m1.960s 00:04:51.972 user 0m1.009s 00:04:51.972 sys 0m0.171s 00:04:51.972 17:21:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.972 17:21:18 -- common/autotest_common.sh@10 -- # set +x 00:04:51.972 ************************************ 00:04:51.972 END TEST locking_overlapped_coremask_via_rpc 00:04:51.972 ************************************ 00:04:51.972 17:21:18 -- event/cpu_locks.sh@174 -- # cleanup 00:04:51.972 17:21:18 -- event/cpu_locks.sh@15 -- # [[ -z 1901918 ]] 00:04:51.972 17:21:18 -- event/cpu_locks.sh@15 -- # killprocess 1901918 00:04:51.972 17:21:18 -- common/autotest_common.sh@936 -- # '[' -z 1901918 ']' 00:04:51.972 17:21:18 -- common/autotest_common.sh@940 -- # kill -0 1901918 00:04:51.972 17:21:18 -- common/autotest_common.sh@941 -- # uname 00:04:51.972 17:21:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:51.972 17:21:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1901918 00:04:51.972 17:21:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:51.972 17:21:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:51.972 17:21:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1901918' 00:04:51.972 killing process with pid 1901918 00:04:51.972 17:21:18 -- common/autotest_common.sh@955 -- # kill 1901918 00:04:51.972 17:21:18 -- common/autotest_common.sh@960 -- # wait 1901918 00:04:52.541 17:21:18 -- event/cpu_locks.sh@16 -- # [[ -z 1901928 ]] 00:04:52.541 17:21:18 -- event/cpu_locks.sh@16 -- # killprocess 1901928 00:04:52.541 17:21:18 -- common/autotest_common.sh@936 -- # '[' -z 1901928 ']' 00:04:52.541 17:21:18 -- common/autotest_common.sh@940 -- # kill -0 1901928 00:04:52.541 17:21:18 -- common/autotest_common.sh@941 -- # uname 00:04:52.541 17:21:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:52.541 17:21:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1901928 00:04:52.541 17:21:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:52.541 17:21:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:52.541 17:21:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1901928' 00:04:52.541 killing process with pid 1901928 00:04:52.541 17:21:18 -- common/autotest_common.sh@955 -- # kill 1901928 00:04:52.541 17:21:18 -- common/autotest_common.sh@960 -- # wait 1901928 00:04:52.801 17:21:19 -- event/cpu_locks.sh@18 -- # rm -f 00:04:52.801 17:21:19 -- event/cpu_locks.sh@1 -- # cleanup 00:04:52.801 17:21:19 -- event/cpu_locks.sh@15 -- # [[ -z 1901918 ]] 00:04:52.801 17:21:19 -- event/cpu_locks.sh@15 -- # killprocess 1901918 
00:04:52.801 17:21:19 -- common/autotest_common.sh@936 -- # '[' -z 1901918 ']' 00:04:52.801 17:21:19 -- common/autotest_common.sh@940 -- # kill -0 1901918 00:04:52.801 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1901918) - No such process 00:04:52.801 17:21:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1901918 is not found' 00:04:52.801 Process with pid 1901918 is not found 00:04:52.801 17:21:19 -- event/cpu_locks.sh@16 -- # [[ -z 1901928 ]] 00:04:52.801 17:21:19 -- event/cpu_locks.sh@16 -- # killprocess 1901928 00:04:52.801 17:21:19 -- common/autotest_common.sh@936 -- # '[' -z 1901928 ']' 00:04:52.801 17:21:19 -- common/autotest_common.sh@940 -- # kill -0 1901928 00:04:52.801 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1901928) - No such process 00:04:52.801 17:21:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1901928 is not found' 00:04:52.801 Process with pid 1901928 is not found 00:04:52.801 17:21:19 -- event/cpu_locks.sh@18 -- # rm -f 00:04:52.801 00:04:52.801 real 0m16.452s 00:04:52.801 user 0m27.727s 00:04:52.801 sys 0m5.572s 00:04:52.801 17:21:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.801 17:21:19 -- common/autotest_common.sh@10 -- # set +x 00:04:52.801 ************************************ 00:04:52.801 END TEST cpu_locks 00:04:52.802 ************************************ 00:04:53.061 00:04:53.061 real 0m41.849s 00:04:53.061 user 1m17.152s 00:04:53.061 sys 0m9.858s 00:04:53.061 17:21:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:53.061 17:21:19 -- common/autotest_common.sh@10 -- # set +x 00:04:53.061 ************************************ 00:04:53.061 END TEST event 00:04:53.061 ************************************ 00:04:53.061 17:21:19 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:04:53.061 17:21:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.061 17:21:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.061 17:21:19 -- common/autotest_common.sh@10 -- # set +x 00:04:53.061 ************************************ 00:04:53.061 START TEST thread 00:04:53.061 ************************************ 00:04:53.061 17:21:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:04:53.061 * Looking for test storage... 00:04:53.061 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:04:53.061 17:21:19 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:53.061 17:21:19 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:53.061 17:21:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.061 17:21:19 -- common/autotest_common.sh@10 -- # set +x 00:04:53.320 ************************************ 00:04:53.320 START TEST thread_poller_perf 00:04:53.320 ************************************ 00:04:53.320 17:21:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:53.320 [2024-04-18 17:21:19.634555] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:04:53.320 [2024-04-18 17:21:19.634621] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1902378 ] 00:04:53.320 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.320 [2024-04-18 17:21:19.696095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.320 [2024-04-18 17:21:19.809140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.320 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:54.701 ====================================== 00:04:54.701 busy:2710421273 (cyc) 00:04:54.701 total_run_count: 291000 00:04:54.701 tsc_hz: 2700000000 (cyc) 00:04:54.701 ====================================== 00:04:54.701 poller_cost: 9314 (cyc), 3449 (nsec) 00:04:54.701 00:04:54.701 real 0m1.321s 00:04:54.701 user 0m1.232s 00:04:54.701 sys 0m0.082s 00:04:54.701 17:21:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.701 17:21:20 -- common/autotest_common.sh@10 -- # set +x 00:04:54.701 ************************************ 00:04:54.701 END TEST thread_poller_perf 00:04:54.702 ************************************ 00:04:54.702 17:21:20 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:54.702 17:21:20 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:04:54.702 17:21:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.702 17:21:20 -- common/autotest_common.sh@10 -- # set +x 00:04:54.702 ************************************ 00:04:54.702 START TEST thread_poller_perf 00:04:54.702 ************************************ 00:04:54.702 17:21:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:54.702 [2024-04-18 17:21:21.073300] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:54.702 [2024-04-18 17:21:21.073366] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1902600 ] 00:04:54.702 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.702 [2024-04-18 17:21:21.136018] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.962 [2024-04-18 17:21:21.252418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.962 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:04:55.901 ====================================== 00:04:55.901 busy:2702970536 (cyc) 00:04:55.901 total_run_count: 3944000 00:04:55.901 tsc_hz: 2700000000 (cyc) 00:04:55.901 ====================================== 00:04:55.901 poller_cost: 685 (cyc), 253 (nsec) 00:04:55.901 00:04:55.901 real 0m1.317s 00:04:55.901 user 0m1.230s 00:04:55.901 sys 0m0.082s 00:04:55.901 17:21:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.901 17:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:55.901 ************************************ 00:04:55.901 END TEST thread_poller_perf 00:04:55.901 ************************************ 00:04:55.901 17:21:22 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:55.901 00:04:55.901 real 0m2.927s 00:04:55.901 user 0m2.578s 00:04:55.901 sys 0m0.325s 00:04:55.901 17:21:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.901 17:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:55.901 ************************************ 00:04:55.901 END TEST thread 00:04:55.901 ************************************ 00:04:55.901 17:21:22 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:04:55.901 17:21:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.901 17:21:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.901 17:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:56.159 ************************************ 00:04:56.159 START TEST accel 00:04:56.159 ************************************ 00:04:56.159 17:21:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:04:56.159 * Looking for test storage... 00:04:56.159 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:04:56.159 17:21:22 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:56.159 17:21:22 -- accel/accel.sh@82 -- # get_expected_opcs 00:04:56.159 17:21:22 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:56.159 17:21:22 -- accel/accel.sh@62 -- # spdk_tgt_pid=1902801 00:04:56.159 17:21:22 -- accel/accel.sh@63 -- # waitforlisten 1902801 00:04:56.159 17:21:22 -- common/autotest_common.sh@817 -- # '[' -z 1902801 ']' 00:04:56.159 17:21:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.159 17:21:22 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:56.159 17:21:22 -- accel/accel.sh@61 -- # build_accel_config 00:04:56.159 17:21:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:56.159 17:21:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.159 17:21:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.159 17:21:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.159 17:21:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:56.159 17:21:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.159 17:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:56.159 17:21:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.159 17:21:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.159 17:21:22 -- accel/accel.sh@40 -- # local IFS=, 00:04:56.159 17:21:22 -- accel/accel.sh@41 -- # jq -r . 
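The poller_perf summaries above can be checked by hand: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure divides that by the TSC rate (tsc_hz 2700000000, i.e. 2.7 cycles per nanosecond). The two runs differ only in the poller period (1 microsecond vs 0, per the banners), which is what the large gap in per-call cost reflects. A quick arithmetic check of the logged numbers:
  awk 'BEGIN {
    printf "run 1: %d cyc, %d nsec\n", 2710421273/291000,  (2710421273/291000)/2.7
    printf "run 2: %d cyc, %d nsec\n", 2702970536/3944000, (2702970536/3944000)/2.7
  }'
  # -> run 1: 9314 cyc, 3449 nsec; run 2: 685 cyc, 253 nsec, matching the log.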
00:04:56.159 [2024-04-18 17:21:22.625389] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:56.159 [2024-04-18 17:21:22.625496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1902801 ] 00:04:56.159 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.159 [2024-04-18 17:21:22.685172] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.417 [2024-04-18 17:21:22.799077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.356 17:21:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:57.356 17:21:23 -- common/autotest_common.sh@850 -- # return 0 00:04:57.356 17:21:23 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:57.356 17:21:23 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:57.356 17:21:23 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:57.356 17:21:23 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:57.356 17:21:23 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:57.356 17:21:23 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:57.356 17:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.356 17:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:57.356 17:21:23 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:04:57.356 17:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.356 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.356 17:21:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:57.356 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.356 17:21:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:57.356 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.356 17:21:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:57.356 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.356 17:21:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:57.356 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.356 17:21:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:57.356 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.356 17:21:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:57.356 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.356 17:21:23 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:04:57.356 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.356 17:21:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:57.356 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.356 17:21:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:57.356 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.356 17:21:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:57.356 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.356 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.356 17:21:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:57.357 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.357 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.357 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.357 17:21:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:57.357 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.357 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.357 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.357 17:21:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:57.357 17:21:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:57.357 17:21:23 -- accel/accel.sh@72 -- # IFS== 00:04:57.357 17:21:23 -- accel/accel.sh@72 -- # read -r opc module 00:04:57.357 17:21:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:57.357 17:21:23 -- accel/accel.sh@75 -- # killprocess 1902801 00:04:57.357 17:21:23 -- common/autotest_common.sh@936 -- # '[' -z 1902801 ']' 00:04:57.357 17:21:23 -- common/autotest_common.sh@940 -- # kill -0 1902801 00:04:57.357 17:21:23 -- common/autotest_common.sh@941 -- # uname 00:04:57.357 17:21:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:57.357 17:21:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1902801 00:04:57.357 17:21:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:57.357 17:21:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:57.357 17:21:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1902801' 00:04:57.357 killing process with pid 1902801 00:04:57.357 17:21:23 -- common/autotest_common.sh@955 -- # kill 1902801 00:04:57.357 17:21:23 -- common/autotest_common.sh@960 -- # wait 1902801 00:04:57.615 17:21:24 -- accel/accel.sh@76 -- # trap - ERR 00:04:57.615 17:21:24 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:57.615 17:21:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:04:57.615 17:21:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.615 17:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:57.872 17:21:24 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:04:57.872 17:21:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:57.872 17:21:24 -- accel/accel.sh@12 -- # build_accel_config 
00:04:57.872 17:21:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.872 17:21:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.872 17:21:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.872 17:21:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.872 17:21:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:57.872 17:21:24 -- accel/accel.sh@40 -- # local IFS=, 00:04:57.872 17:21:24 -- accel/accel.sh@41 -- # jq -r . 00:04:57.872 17:21:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:57.872 17:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:57.872 17:21:24 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:57.872 17:21:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:57.872 17:21:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.872 17:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:57.872 ************************************ 00:04:57.872 START TEST accel_missing_filename 00:04:57.872 ************************************ 00:04:57.872 17:21:24 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:04:57.872 17:21:24 -- common/autotest_common.sh@638 -- # local es=0 00:04:57.872 17:21:24 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:57.872 17:21:24 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:57.872 17:21:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:57.872 17:21:24 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:57.872 17:21:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:57.872 17:21:24 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:04:57.872 17:21:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:57.872 17:21:24 -- accel/accel.sh@12 -- # build_accel_config 00:04:57.872 17:21:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.872 17:21:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.872 17:21:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.872 17:21:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.872 17:21:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:57.872 17:21:24 -- accel/accel.sh@40 -- # local IFS=, 00:04:57.872 17:21:24 -- accel/accel.sh@41 -- # jq -r . 00:04:57.872 [2024-04-18 17:21:24.340984] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:04:57.872 [2024-04-18 17:21:24.341047] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1903110 ] 00:04:57.872 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.872 [2024-04-18 17:21:24.406086] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.131 [2024-04-18 17:21:24.520231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.131 [2024-04-18 17:21:24.581464] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:58.131 [2024-04-18 17:21:24.670312] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:04:58.390 A filename is required. 
00:04:58.390 17:21:24 -- common/autotest_common.sh@641 -- # es=234 00:04:58.390 17:21:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:58.390 17:21:24 -- common/autotest_common.sh@650 -- # es=106 00:04:58.390 17:21:24 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:58.390 17:21:24 -- common/autotest_common.sh@658 -- # es=1 00:04:58.390 17:21:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:58.390 00:04:58.390 real 0m0.472s 00:04:58.390 user 0m0.358s 00:04:58.390 sys 0m0.148s 00:04:58.390 17:21:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.390 17:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:58.390 ************************************ 00:04:58.390 END TEST accel_missing_filename 00:04:58.390 ************************************ 00:04:58.390 17:21:24 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:04:58.390 17:21:24 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:58.390 17:21:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.390 17:21:24 -- common/autotest_common.sh@10 -- # set +x 00:04:58.390 ************************************ 00:04:58.390 START TEST accel_compress_verify 00:04:58.390 ************************************ 00:04:58.390 17:21:24 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:04:58.390 17:21:24 -- common/autotest_common.sh@638 -- # local es=0 00:04:58.390 17:21:24 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:04:58.390 17:21:24 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:58.390 17:21:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:58.390 17:21:24 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:58.390 17:21:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:58.390 17:21:24 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:04:58.390 17:21:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:04:58.390 17:21:24 -- accel/accel.sh@12 -- # build_accel_config 00:04:58.390 17:21:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:58.390 17:21:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:58.390 17:21:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:58.390 17:21:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:58.390 17:21:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:58.390 17:21:24 -- accel/accel.sh@40 -- # local IFS=, 00:04:58.390 17:21:24 -- accel/accel.sh@41 -- # jq -r . 00:04:58.649 [2024-04-18 17:21:24.938912] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:04:58.649 [2024-04-18 17:21:24.938965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1903142 ] 00:04:58.649 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.649 [2024-04-18 17:21:25.000629] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.649 [2024-04-18 17:21:25.117818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.649 [2024-04-18 17:21:25.179108] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:58.907 [2024-04-18 17:21:25.265705] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:04:58.907 00:04:58.907 Compression does not support the verify option, aborting. 00:04:58.907 17:21:25 -- common/autotest_common.sh@641 -- # es=161 00:04:58.907 17:21:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:58.907 17:21:25 -- common/autotest_common.sh@650 -- # es=33 00:04:58.907 17:21:25 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:58.907 17:21:25 -- common/autotest_common.sh@658 -- # es=1 00:04:58.907 17:21:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:58.907 00:04:58.907 real 0m0.466s 00:04:58.907 user 0m0.359s 00:04:58.907 sys 0m0.137s 00:04:58.907 17:21:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.907 17:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:58.907 ************************************ 00:04:58.907 END TEST accel_compress_verify 00:04:58.907 ************************************ 00:04:58.907 17:21:25 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:58.907 17:21:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:04:58.907 17:21:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.907 17:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:59.167 ************************************ 00:04:59.167 START TEST accel_wrong_workload 00:04:59.167 ************************************ 00:04:59.167 17:21:25 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:04:59.167 17:21:25 -- common/autotest_common.sh@638 -- # local es=0 00:04:59.167 17:21:25 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:59.167 17:21:25 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:59.168 17:21:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:59.168 17:21:25 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:59.168 17:21:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:59.168 17:21:25 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:04:59.168 17:21:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:59.168 17:21:25 -- accel/accel.sh@12 -- # build_accel_config 00:04:59.168 17:21:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.168 17:21:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.168 17:21:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.168 17:21:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.168 17:21:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.168 17:21:25 -- accel/accel.sh@40 -- # local IFS=, 00:04:59.168 17:21:25 -- accel/accel.sh@41 -- # jq -r . 
00:04:59.168 Unsupported workload type: foobar 00:04:59.168 [2024-04-18 17:21:25.520868] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:59.168 accel_perf options: 00:04:59.168 [-h help message] 00:04:59.168 [-q queue depth per core] 00:04:59.168 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:59.168 [-T number of threads per core 00:04:59.168 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:59.168 [-t time in seconds] 00:04:59.168 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:59.168 [ dif_verify, , dif_generate, dif_generate_copy 00:04:59.168 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:59.168 [-l for compress/decompress workloads, name of uncompressed input file 00:04:59.168 [-S for crc32c workload, use this seed value (default 0) 00:04:59.168 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:59.168 [-f for fill workload, use this BYTE value (default 255) 00:04:59.168 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:59.168 [-y verify result if this switch is on] 00:04:59.168 [-a tasks to allocate per core (default: same value as -q)] 00:04:59.168 Can be used to spread operations across a wider range of memory. 00:04:59.168 17:21:25 -- common/autotest_common.sh@641 -- # es=1 00:04:59.168 17:21:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:59.168 17:21:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:59.168 17:21:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:59.168 00:04:59.168 real 0m0.023s 00:04:59.168 user 0m0.012s 00:04:59.168 sys 0m0.012s 00:04:59.168 17:21:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.168 17:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:59.168 ************************************ 00:04:59.168 END TEST accel_wrong_workload 00:04:59.168 ************************************ 00:04:59.168 Error: writing output failed: Broken pipe 00:04:59.168 17:21:25 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:59.168 17:21:25 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:04:59.168 17:21:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.168 17:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:59.168 ************************************ 00:04:59.168 START TEST accel_negative_buffers 00:04:59.168 ************************************ 00:04:59.168 17:21:25 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:59.168 17:21:25 -- common/autotest_common.sh@638 -- # local es=0 00:04:59.168 17:21:25 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:59.168 17:21:25 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:59.168 17:21:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:59.168 17:21:25 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:59.168 17:21:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:59.168 17:21:25 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:04:59.168 17:21:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:04:59.168 17:21:25 -- accel/accel.sh@12 -- # build_accel_config 00:04:59.168 17:21:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.168 17:21:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.168 17:21:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.168 17:21:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.168 17:21:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.168 17:21:25 -- accel/accel.sh@40 -- # local IFS=, 00:04:59.168 17:21:25 -- accel/accel.sh@41 -- # jq -r . 00:04:59.168 -x option must be non-negative. 00:04:59.168 [2024-04-18 17:21:25.649308] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:59.168 accel_perf options: 00:04:59.168 [-h help message] 00:04:59.168 [-q queue depth per core] 00:04:59.168 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:59.168 [-T number of threads per core 00:04:59.168 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:59.168 [-t time in seconds] 00:04:59.168 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:59.168 [ dif_verify, , dif_generate, dif_generate_copy 00:04:59.168 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:59.168 [-l for compress/decompress workloads, name of uncompressed input file 00:04:59.168 [-S for crc32c workload, use this seed value (default 0) 00:04:59.168 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:59.168 [-f for fill workload, use this BYTE value (default 255) 00:04:59.168 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:59.168 [-y verify result if this switch is on] 00:04:59.168 [-a tasks to allocate per core (default: same value as -q)] 00:04:59.168 Can be used to spread operations across a wider range of memory. 
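[editor note] The usage text above (printed once per rejected option) lists the knobs accel_perf accepts. For contrast with the rejected "-x -1", here is a hedged example of a well-formed invocation built only from flags shown in that usage block; the binary path is the workspace one from this log and the numeric values are illustrative, not settings taken from the CI scripts.
# Illustrative xor run using flags from the usage text above:
# 2 source buffers (-x, the documented minimum), 64-deep queue (-q),
# 4 KiB transfers (-o), 1 second (-t), with result verification (-y).
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w xor -x 2 -q 64 -o 4096 -y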
00:04:59.168 17:21:25 -- common/autotest_common.sh@641 -- # es=1 00:04:59.168 17:21:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:59.168 17:21:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:59.168 17:21:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:59.168 00:04:59.168 real 0m0.021s 00:04:59.168 user 0m0.013s 00:04:59.168 sys 0m0.008s 00:04:59.168 17:21:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.168 17:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:59.168 ************************************ 00:04:59.168 END TEST accel_negative_buffers 00:04:59.168 ************************************ 00:04:59.168 Error: writing output failed: Broken pipe 00:04:59.168 17:21:25 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:59.168 17:21:25 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:04:59.168 17:21:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.168 17:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:59.428 ************************************ 00:04:59.428 START TEST accel_crc32c 00:04:59.428 ************************************ 00:04:59.428 17:21:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:59.428 17:21:25 -- accel/accel.sh@16 -- # local accel_opc 00:04:59.428 17:21:25 -- accel/accel.sh@17 -- # local accel_module 00:04:59.428 17:21:25 -- accel/accel.sh@19 -- # IFS=: 00:04:59.428 17:21:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:59.428 17:21:25 -- accel/accel.sh@19 -- # read -r var val 00:04:59.428 17:21:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:59.428 17:21:25 -- accel/accel.sh@12 -- # build_accel_config 00:04:59.428 17:21:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.428 17:21:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.428 17:21:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.428 17:21:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.428 17:21:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.428 17:21:25 -- accel/accel.sh@40 -- # local IFS=, 00:04:59.428 17:21:25 -- accel/accel.sh@41 -- # jq -r . 00:04:59.428 [2024-04-18 17:21:25.788259] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:04:59.428 [2024-04-18 17:21:25.788308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1903354 ] 00:04:59.428 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.428 [2024-04-18 17:21:25.849212] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.428 [2024-04-18 17:21:25.965974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.688 17:21:26 -- accel/accel.sh@20 -- # val= 00:04:59.688 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.688 17:21:26 -- accel/accel.sh@20 -- # val= 00:04:59.688 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.688 17:21:26 -- accel/accel.sh@20 -- # val=0x1 00:04:59.688 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.688 17:21:26 -- accel/accel.sh@20 -- # val= 00:04:59.688 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.688 17:21:26 -- accel/accel.sh@20 -- # val= 00:04:59.688 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.688 17:21:26 -- accel/accel.sh@20 -- # val=crc32c 00:04:59.688 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.688 17:21:26 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.688 17:21:26 -- accel/accel.sh@20 -- # val=32 00:04:59.688 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.688 17:21:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:59.688 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.688 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.688 17:21:26 -- accel/accel.sh@20 -- # val= 00:04:59.688 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.689 17:21:26 -- accel/accel.sh@20 -- # val=software 00:04:59.689 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.689 17:21:26 -- accel/accel.sh@22 -- # accel_module=software 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.689 17:21:26 -- accel/accel.sh@20 -- # val=32 00:04:59.689 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.689 17:21:26 -- accel/accel.sh@20 -- # val=32 00:04:59.689 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.689 17:21:26 -- 
accel/accel.sh@20 -- # val=1 00:04:59.689 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.689 17:21:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:59.689 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.689 17:21:26 -- accel/accel.sh@20 -- # val=Yes 00:04:59.689 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.689 17:21:26 -- accel/accel.sh@20 -- # val= 00:04:59.689 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:04:59.689 17:21:26 -- accel/accel.sh@20 -- # val= 00:04:59.689 17:21:26 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # IFS=: 00:04:59.689 17:21:26 -- accel/accel.sh@19 -- # read -r var val 00:05:01.065 17:21:27 -- accel/accel.sh@20 -- # val= 00:05:01.065 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.065 17:21:27 -- accel/accel.sh@20 -- # val= 00:05:01.065 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.065 17:21:27 -- accel/accel.sh@20 -- # val= 00:05:01.065 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.065 17:21:27 -- accel/accel.sh@20 -- # val= 00:05:01.065 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.065 17:21:27 -- accel/accel.sh@20 -- # val= 00:05:01.065 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.065 17:21:27 -- accel/accel.sh@20 -- # val= 00:05:01.065 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.065 17:21:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:01.065 17:21:27 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:01.065 17:21:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:01.065 00:05:01.065 real 0m1.470s 00:05:01.065 user 0m1.321s 00:05:01.065 sys 0m0.150s 00:05:01.065 17:21:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.065 17:21:27 -- common/autotest_common.sh@10 -- # set +x 00:05:01.065 ************************************ 00:05:01.065 END TEST accel_crc32c 00:05:01.065 ************************************ 00:05:01.065 17:21:27 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:01.065 17:21:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:01.065 17:21:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.065 17:21:27 -- common/autotest_common.sh@10 -- # set +x 00:05:01.065 ************************************ 00:05:01.065 START TEST 
accel_crc32c_C2 00:05:01.065 ************************************ 00:05:01.065 17:21:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:01.065 17:21:27 -- accel/accel.sh@16 -- # local accel_opc 00:05:01.065 17:21:27 -- accel/accel.sh@17 -- # local accel_module 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.065 17:21:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:01.065 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.065 17:21:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:01.065 17:21:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:01.065 17:21:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:01.065 17:21:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:01.065 17:21:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.065 17:21:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.065 17:21:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:01.065 17:21:27 -- accel/accel.sh@40 -- # local IFS=, 00:05:01.065 17:21:27 -- accel/accel.sh@41 -- # jq -r . 00:05:01.065 [2024-04-18 17:21:27.382897] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:05:01.065 [2024-04-18 17:21:27.382947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1903518 ] 00:05:01.065 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.065 [2024-04-18 17:21:27.442733] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.065 [2024-04-18 17:21:27.563775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.324 17:21:27 -- accel/accel.sh@20 -- # val= 00:05:01.324 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.324 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.324 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val= 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val=0x1 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val= 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val= 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val=crc32c 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val=0 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val= 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val=software 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@22 -- # accel_module=software 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val=32 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val=32 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val=1 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val=Yes 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val= 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:01.325 17:21:27 -- accel/accel.sh@20 -- # val= 00:05:01.325 17:21:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # IFS=: 00:05:01.325 17:21:27 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:28 -- accel/accel.sh@20 -- # val= 00:05:02.706 17:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:28 -- accel/accel.sh@20 -- # val= 00:05:02.706 17:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:28 -- accel/accel.sh@20 -- # val= 00:05:02.706 17:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:28 -- accel/accel.sh@20 -- # val= 00:05:02.706 17:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:28 -- accel/accel.sh@20 -- # val= 00:05:02.706 17:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:28 -- 
accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:28 -- accel/accel.sh@20 -- # val= 00:05:02.706 17:21:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:02.706 17:21:28 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:02.706 17:21:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:02.706 00:05:02.706 real 0m1.469s 00:05:02.706 user 0m1.331s 00:05:02.706 sys 0m0.139s 00:05:02.706 17:21:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:02.706 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:05:02.706 ************************************ 00:05:02.706 END TEST accel_crc32c_C2 00:05:02.706 ************************************ 00:05:02.706 17:21:28 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:02.706 17:21:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:02.706 17:21:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.706 17:21:28 -- common/autotest_common.sh@10 -- # set +x 00:05:02.706 ************************************ 00:05:02.706 START TEST accel_copy 00:05:02.706 ************************************ 00:05:02.706 17:21:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:05:02.706 17:21:28 -- accel/accel.sh@16 -- # local accel_opc 00:05:02.706 17:21:28 -- accel/accel.sh@17 -- # local accel_module 00:05:02.706 17:21:28 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:02.706 17:21:28 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:02.706 17:21:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:02.706 17:21:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:02.706 17:21:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:02.706 17:21:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.706 17:21:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.706 17:21:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:02.706 17:21:28 -- accel/accel.sh@40 -- # local IFS=, 00:05:02.706 17:21:28 -- accel/accel.sh@41 -- # jq -r . 00:05:02.706 [2024-04-18 17:21:28.974825] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:05:02.706 [2024-04-18 17:21:28.974876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1903801 ] 00:05:02.706 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.706 [2024-04-18 17:21:29.035119] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.706 [2024-04-18 17:21:29.149267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.706 17:21:29 -- accel/accel.sh@20 -- # val= 00:05:02.706 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:29 -- accel/accel.sh@20 -- # val= 00:05:02.706 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:29 -- accel/accel.sh@20 -- # val=0x1 00:05:02.706 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:29 -- accel/accel.sh@20 -- # val= 00:05:02.706 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:29 -- accel/accel.sh@20 -- # val= 00:05:02.706 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:29 -- accel/accel.sh@20 -- # val=copy 00:05:02.706 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:29 -- accel/accel.sh@23 -- # accel_opc=copy 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:02.706 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:29 -- accel/accel.sh@20 -- # val= 00:05:02.706 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:29 -- accel/accel.sh@20 -- # val=software 00:05:02.706 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:29 -- accel/accel.sh@22 -- # accel_module=software 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:29 -- accel/accel.sh@20 -- # val=32 00:05:02.706 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:29 -- accel/accel.sh@20 -- # val=32 00:05:02.706 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:29 -- accel/accel.sh@20 -- # val=1 00:05:02.706 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.706 17:21:29 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:02.706 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.706 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.707 17:21:29 -- accel/accel.sh@20 -- # val=Yes 00:05:02.707 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.707 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.707 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.707 17:21:29 -- accel/accel.sh@20 -- # val= 00:05:02.707 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.707 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.707 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:02.707 17:21:29 -- accel/accel.sh@20 -- # val= 00:05:02.707 17:21:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:02.707 17:21:29 -- accel/accel.sh@19 -- # IFS=: 00:05:02.707 17:21:29 -- accel/accel.sh@19 -- # read -r var val 00:05:04.083 17:21:30 -- accel/accel.sh@20 -- # val= 00:05:04.083 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.083 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.083 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.083 17:21:30 -- accel/accel.sh@20 -- # val= 00:05:04.083 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.083 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.083 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.083 17:21:30 -- accel/accel.sh@20 -- # val= 00:05:04.083 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.083 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.083 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.083 17:21:30 -- accel/accel.sh@20 -- # val= 00:05:04.083 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.083 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.083 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.083 17:21:30 -- accel/accel.sh@20 -- # val= 00:05:04.083 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.083 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.083 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.083 17:21:30 -- accel/accel.sh@20 -- # val= 00:05:04.083 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.083 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.083 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.083 17:21:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:04.083 17:21:30 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:04.083 17:21:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:04.083 00:05:04.083 real 0m1.462s 00:05:04.083 user 0m1.324s 00:05:04.083 sys 0m0.138s 00:05:04.083 17:21:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.083 17:21:30 -- common/autotest_common.sh@10 -- # set +x 00:05:04.083 ************************************ 00:05:04.084 END TEST accel_copy 00:05:04.084 ************************************ 00:05:04.084 17:21:30 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:04.084 17:21:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:04.084 17:21:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.084 17:21:30 -- common/autotest_common.sh@10 -- # set +x 00:05:04.084 ************************************ 00:05:04.084 START TEST accel_fill 00:05:04.084 ************************************ 00:05:04.084 17:21:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:04.084 17:21:30 -- accel/accel.sh@16 -- # local accel_opc 
00:05:04.084 17:21:30 -- accel/accel.sh@17 -- # local accel_module 00:05:04.084 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.084 17:21:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:04.084 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.084 17:21:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:04.084 17:21:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:04.084 17:21:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.084 17:21:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.084 17:21:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.084 17:21:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.084 17:21:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.084 17:21:30 -- accel/accel.sh@40 -- # local IFS=, 00:05:04.084 17:21:30 -- accel/accel.sh@41 -- # jq -r . 00:05:04.084 [2024-04-18 17:21:30.557691] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:05:04.084 [2024-04-18 17:21:30.557753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1903965 ] 00:05:04.084 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.084 [2024-04-18 17:21:30.619434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.347 [2024-04-18 17:21:30.736336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val= 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val= 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val=0x1 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val= 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val= 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val=fill 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@23 -- # accel_opc=fill 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val=0x80 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # 
read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val= 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val=software 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@22 -- # accel_module=software 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val=64 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val=64 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val=1 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val=Yes 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val= 00:05:04.347 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.347 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:04.347 17:21:30 -- accel/accel.sh@20 -- # val= 00:05:04.348 17:21:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:04.348 17:21:30 -- accel/accel.sh@19 -- # IFS=: 00:05:04.348 17:21:30 -- accel/accel.sh@19 -- # read -r var val 00:05:05.775 17:21:32 -- accel/accel.sh@20 -- # val= 00:05:05.775 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:05.775 17:21:32 -- accel/accel.sh@20 -- # val= 00:05:05.775 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:05.775 17:21:32 -- accel/accel.sh@20 -- # val= 00:05:05.775 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:05.775 17:21:32 -- accel/accel.sh@20 -- # val= 00:05:05.775 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:05.775 17:21:32 -- accel/accel.sh@20 -- # val= 00:05:05.775 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:05.775 17:21:32 -- accel/accel.sh@20 -- # val= 00:05:05.775 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # 
IFS=: 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:05.775 17:21:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:05.775 17:21:32 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:05.775 17:21:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:05.775 00:05:05.775 real 0m1.474s 00:05:05.775 user 0m1.332s 00:05:05.775 sys 0m0.144s 00:05:05.775 17:21:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.775 17:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:05.775 ************************************ 00:05:05.775 END TEST accel_fill 00:05:05.775 ************************************ 00:05:05.775 17:21:32 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:05.775 17:21:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:05.775 17:21:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.775 17:21:32 -- common/autotest_common.sh@10 -- # set +x 00:05:05.775 ************************************ 00:05:05.775 START TEST accel_copy_crc32c 00:05:05.775 ************************************ 00:05:05.775 17:21:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:05:05.775 17:21:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:05.775 17:21:32 -- accel/accel.sh@17 -- # local accel_module 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:05.775 17:21:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:05.775 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:05.775 17:21:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:05.775 17:21:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:05.775 17:21:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.775 17:21:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.775 17:21:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.775 17:21:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.775 17:21:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.775 17:21:32 -- accel/accel.sh@40 -- # local IFS=, 00:05:05.775 17:21:32 -- accel/accel.sh@41 -- # jq -r . 00:05:05.775 [2024-04-18 17:21:32.155728] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:05:05.775 [2024-04-18 17:21:32.155792] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904249 ] 00:05:05.775 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.775 [2024-04-18 17:21:32.218658] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.036 [2024-04-18 17:21:32.334343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val= 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val= 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val=0x1 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val= 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val= 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val=0 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val= 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val=software 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@22 -- # accel_module=software 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val=32 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 
00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val=32 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val=1 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val=Yes 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val= 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:06.036 17:21:32 -- accel/accel.sh@20 -- # val= 00:05:06.036 17:21:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # IFS=: 00:05:06.036 17:21:32 -- accel/accel.sh@19 -- # read -r var val 00:05:07.421 17:21:33 -- accel/accel.sh@20 -- # val= 00:05:07.421 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.421 17:21:33 -- accel/accel.sh@20 -- # val= 00:05:07.421 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.421 17:21:33 -- accel/accel.sh@20 -- # val= 00:05:07.421 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.421 17:21:33 -- accel/accel.sh@20 -- # val= 00:05:07.421 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.421 17:21:33 -- accel/accel.sh@20 -- # val= 00:05:07.421 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.421 17:21:33 -- accel/accel.sh@20 -- # val= 00:05:07.421 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.421 17:21:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:07.421 17:21:33 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:07.421 17:21:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:07.421 00:05:07.421 real 0m1.475s 00:05:07.421 user 0m1.340s 00:05:07.421 sys 0m0.136s 00:05:07.421 17:21:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.421 17:21:33 -- common/autotest_common.sh@10 -- # set +x 00:05:07.421 ************************************ 00:05:07.421 END TEST accel_copy_crc32c 00:05:07.421 ************************************ 00:05:07.421 17:21:33 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:07.421 
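[editor note] Each run_test line, like the accel_copy_crc32c_C2 invocation that closes the block above, goes through the accel_test wrapper, which (per the xtrace) adds build_accel_config bookkeeping and a generated accel JSON config on /dev/fd/62 before launching accel_perf with the workload flags. A hedged re-run of just the workload part is sketched below; the config-over-fd plumbing is left out, the path is the one from this log, and `time` stands in for the real/user/sys lines the harness prints.
# Hedged re-run of the copy_crc32c workload with a 2-element io vector.
# -C sets the io vector size per the usage text; -y verifies results.
APP=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
time "$APP" -t 1 -w copy_crc32c -y -C 2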
17:21:33 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:07.421 17:21:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.421 17:21:33 -- common/autotest_common.sh@10 -- # set +x 00:05:07.421 ************************************ 00:05:07.421 START TEST accel_copy_crc32c_C2 00:05:07.421 ************************************ 00:05:07.421 17:21:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:07.421 17:21:33 -- accel/accel.sh@16 -- # local accel_opc 00:05:07.421 17:21:33 -- accel/accel.sh@17 -- # local accel_module 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.421 17:21:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:07.421 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.421 17:21:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:07.421 17:21:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.421 17:21:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.421 17:21:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.421 17:21:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.421 17:21:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.421 17:21:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.421 17:21:33 -- accel/accel.sh@40 -- # local IFS=, 00:05:07.421 17:21:33 -- accel/accel.sh@41 -- # jq -r . 00:05:07.421 [2024-04-18 17:21:33.739881] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:05:07.421 [2024-04-18 17:21:33.739932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904418 ] 00:05:07.421 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.421 [2024-04-18 17:21:33.800191] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.421 [2024-04-18 17:21:33.916831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.681 17:21:33 -- accel/accel.sh@20 -- # val= 00:05:07.681 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.681 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.681 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.681 17:21:33 -- accel/accel.sh@20 -- # val= 00:05:07.681 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.681 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.681 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.681 17:21:33 -- accel/accel.sh@20 -- # val=0x1 00:05:07.681 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.681 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.681 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.681 17:21:33 -- accel/accel.sh@20 -- # val= 00:05:07.681 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.681 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.681 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.681 17:21:33 -- accel/accel.sh@20 -- # val= 00:05:07.681 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.681 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.681 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.681 17:21:33 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:07.681 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.681 17:21:33 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:07.681 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.681 
17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.681 17:21:33 -- accel/accel.sh@20 -- # val=0 00:05:07.681 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.681 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.681 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.682 17:21:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.682 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.682 17:21:33 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:07.682 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.682 17:21:33 -- accel/accel.sh@20 -- # val= 00:05:07.682 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.682 17:21:33 -- accel/accel.sh@20 -- # val=software 00:05:07.682 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.682 17:21:33 -- accel/accel.sh@22 -- # accel_module=software 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.682 17:21:33 -- accel/accel.sh@20 -- # val=32 00:05:07.682 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.682 17:21:33 -- accel/accel.sh@20 -- # val=32 00:05:07.682 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.682 17:21:33 -- accel/accel.sh@20 -- # val=1 00:05:07.682 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.682 17:21:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:07.682 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.682 17:21:33 -- accel/accel.sh@20 -- # val=Yes 00:05:07.682 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.682 17:21:33 -- accel/accel.sh@20 -- # val= 00:05:07.682 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:07.682 17:21:33 -- accel/accel.sh@20 -- # val= 00:05:07.682 17:21:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # IFS=: 00:05:07.682 17:21:33 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@20 -- # val= 00:05:09.062 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@20 -- # val= 00:05:09.062 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@20 -- # val= 00:05:09.062 17:21:35 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@20 -- # val= 00:05:09.062 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@20 -- # val= 00:05:09.062 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@20 -- # val= 00:05:09.062 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.062 17:21:35 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:09.062 17:21:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.062 00:05:09.062 real 0m1.473s 00:05:09.062 user 0m1.335s 00:05:09.062 sys 0m0.139s 00:05:09.062 17:21:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:09.062 17:21:35 -- common/autotest_common.sh@10 -- # set +x 00:05:09.062 ************************************ 00:05:09.062 END TEST accel_copy_crc32c_C2 00:05:09.062 ************************************ 00:05:09.062 17:21:35 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:09.062 17:21:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:09.062 17:21:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.062 17:21:35 -- common/autotest_common.sh@10 -- # set +x 00:05:09.062 ************************************ 00:05:09.062 START TEST accel_dualcast 00:05:09.062 ************************************ 00:05:09.062 17:21:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:05:09.062 17:21:35 -- accel/accel.sh@16 -- # local accel_opc 00:05:09.062 17:21:35 -- accel/accel.sh@17 -- # local accel_module 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.062 17:21:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:09.062 17:21:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:09.062 17:21:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.062 17:21:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.062 17:21:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.062 17:21:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.062 17:21:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.062 17:21:35 -- accel/accel.sh@40 -- # local IFS=, 00:05:09.062 17:21:35 -- accel/accel.sh@41 -- # jq -r . 00:05:09.062 [2024-04-18 17:21:35.337081] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:05:09.062 [2024-04-18 17:21:35.337145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904593 ] 00:05:09.062 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.062 [2024-04-18 17:21:35.399974] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.062 [2024-04-18 17:21:35.515706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.062 17:21:35 -- accel/accel.sh@20 -- # val= 00:05:09.062 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@20 -- # val= 00:05:09.062 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@20 -- # val=0x1 00:05:09.062 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@20 -- # val= 00:05:09.062 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@20 -- # val= 00:05:09.062 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@20 -- # val=dualcast 00:05:09.062 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.062 17:21:35 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.062 17:21:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.062 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.062 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.063 17:21:35 -- accel/accel.sh@20 -- # val= 00:05:09.063 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.063 17:21:35 -- accel/accel.sh@20 -- # val=software 00:05:09.063 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.063 17:21:35 -- accel/accel.sh@22 -- # accel_module=software 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.063 17:21:35 -- accel/accel.sh@20 -- # val=32 00:05:09.063 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.063 17:21:35 -- accel/accel.sh@20 -- # val=32 00:05:09.063 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.063 17:21:35 -- accel/accel.sh@20 -- # val=1 00:05:09.063 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.063 17:21:35 
-- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.063 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.063 17:21:35 -- accel/accel.sh@20 -- # val=Yes 00:05:09.063 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.063 17:21:35 -- accel/accel.sh@20 -- # val= 00:05:09.063 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:09.063 17:21:35 -- accel/accel.sh@20 -- # val= 00:05:09.063 17:21:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # IFS=: 00:05:09.063 17:21:35 -- accel/accel.sh@19 -- # read -r var val 00:05:10.440 17:21:36 -- accel/accel.sh@20 -- # val= 00:05:10.440 17:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.440 17:21:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.440 17:21:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.440 17:21:36 -- accel/accel.sh@20 -- # val= 00:05:10.440 17:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.440 17:21:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.440 17:21:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.440 17:21:36 -- accel/accel.sh@20 -- # val= 00:05:10.440 17:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.440 17:21:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.440 17:21:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.440 17:21:36 -- accel/accel.sh@20 -- # val= 00:05:10.441 17:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.441 17:21:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.441 17:21:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.441 17:21:36 -- accel/accel.sh@20 -- # val= 00:05:10.441 17:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.441 17:21:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.441 17:21:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.441 17:21:36 -- accel/accel.sh@20 -- # val= 00:05:10.441 17:21:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.441 17:21:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.441 17:21:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.441 17:21:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.441 17:21:36 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:10.441 17:21:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.441 00:05:10.441 real 0m1.475s 00:05:10.441 user 0m1.330s 00:05:10.441 sys 0m0.145s 00:05:10.441 17:21:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.441 17:21:36 -- common/autotest_common.sh@10 -- # set +x 00:05:10.441 ************************************ 00:05:10.441 END TEST accel_dualcast 00:05:10.441 ************************************ 00:05:10.441 17:21:36 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:10.441 17:21:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:10.441 17:21:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.441 17:21:36 -- common/autotest_common.sh@10 -- # set +x 00:05:10.441 ************************************ 00:05:10.441 START TEST accel_compare 00:05:10.441 ************************************ 00:05:10.441 17:21:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:05:10.441 17:21:36 -- accel/accel.sh@16 -- # local accel_opc 00:05:10.441 17:21:36 
-- accel/accel.sh@17 -- # local accel_module 00:05:10.441 17:21:36 -- accel/accel.sh@19 -- # IFS=: 00:05:10.441 17:21:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:10.441 17:21:36 -- accel/accel.sh@19 -- # read -r var val 00:05:10.441 17:21:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:10.441 17:21:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:10.441 17:21:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.441 17:21:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.441 17:21:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.441 17:21:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.441 17:21:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.441 17:21:36 -- accel/accel.sh@40 -- # local IFS=, 00:05:10.441 17:21:36 -- accel/accel.sh@41 -- # jq -r . 00:05:10.441 [2024-04-18 17:21:36.932742] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:05:10.441 [2024-04-18 17:21:36.932805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904861 ] 00:05:10.441 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.701 [2024-04-18 17:21:36.993607] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.701 [2024-04-18 17:21:37.107646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val= 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val= 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val=0x1 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val= 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val= 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val=compare 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@23 -- # accel_opc=compare 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val= 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- 
accel/accel.sh@20 -- # val=software 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@22 -- # accel_module=software 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val=32 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val=32 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val=1 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val=Yes 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val= 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:10.701 17:21:37 -- accel/accel.sh@20 -- # val= 00:05:10.701 17:21:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # IFS=: 00:05:10.701 17:21:37 -- accel/accel.sh@19 -- # read -r var val 00:05:12.081 17:21:38 -- accel/accel.sh@20 -- # val= 00:05:12.081 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.081 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.081 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.081 17:21:38 -- accel/accel.sh@20 -- # val= 00:05:12.081 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.081 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.081 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.081 17:21:38 -- accel/accel.sh@20 -- # val= 00:05:12.081 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.081 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.081 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.081 17:21:38 -- accel/accel.sh@20 -- # val= 00:05:12.081 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.081 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.081 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.081 17:21:38 -- accel/accel.sh@20 -- # val= 00:05:12.081 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.081 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.081 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.081 17:21:38 -- accel/accel.sh@20 -- # val= 00:05:12.081 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.081 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.081 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.081 17:21:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.081 17:21:38 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:12.081 17:21:38 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:12.081 00:05:12.081 real 0m1.464s 00:05:12.081 user 0m1.319s 00:05:12.081 sys 0m0.146s 00:05:12.081 17:21:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:12.081 17:21:38 -- common/autotest_common.sh@10 -- # set +x 00:05:12.081 ************************************ 00:05:12.081 END TEST accel_compare 00:05:12.081 ************************************ 00:05:12.081 17:21:38 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:12.081 17:21:38 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:12.081 17:21:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.081 17:21:38 -- common/autotest_common.sh@10 -- # set +x 00:05:12.081 ************************************ 00:05:12.081 START TEST accel_xor 00:05:12.081 ************************************ 00:05:12.081 17:21:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:05:12.081 17:21:38 -- accel/accel.sh@16 -- # local accel_opc 00:05:12.081 17:21:38 -- accel/accel.sh@17 -- # local accel_module 00:05:12.081 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.082 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.082 17:21:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:12.082 17:21:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:12.082 17:21:38 -- accel/accel.sh@12 -- # build_accel_config 00:05:12.082 17:21:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.082 17:21:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.082 17:21:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.082 17:21:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.082 17:21:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.082 17:21:38 -- accel/accel.sh@40 -- # local IFS=, 00:05:12.082 17:21:38 -- accel/accel.sh@41 -- # jq -r . 00:05:12.082 [2024-04-18 17:21:38.521401] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:05:12.082 [2024-04-18 17:21:38.521479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905031 ] 00:05:12.082 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.082 [2024-04-18 17:21:38.582581] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.341 [2024-04-18 17:21:38.698926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.341 17:21:38 -- accel/accel.sh@20 -- # val= 00:05:12.341 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.341 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.341 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.341 17:21:38 -- accel/accel.sh@20 -- # val= 00:05:12.341 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.341 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.341 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.341 17:21:38 -- accel/accel.sh@20 -- # val=0x1 00:05:12.341 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.341 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- accel/accel.sh@20 -- # val= 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- accel/accel.sh@20 -- # val= 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- accel/accel.sh@20 -- # val=xor 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- accel/accel.sh@20 -- # val=2 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- accel/accel.sh@20 -- # val= 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- accel/accel.sh@20 -- # val=software 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@22 -- # accel_module=software 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- accel/accel.sh@20 -- # val=32 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- accel/accel.sh@20 -- # val=32 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- 
accel/accel.sh@20 -- # val=1 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- accel/accel.sh@20 -- # val=Yes 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- accel/accel.sh@20 -- # val= 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:12.342 17:21:38 -- accel/accel.sh@20 -- # val= 00:05:12.342 17:21:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # IFS=: 00:05:12.342 17:21:38 -- accel/accel.sh@19 -- # read -r var val 00:05:13.724 17:21:39 -- accel/accel.sh@20 -- # val= 00:05:13.724 17:21:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.724 17:21:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.724 17:21:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.724 17:21:39 -- accel/accel.sh@20 -- # val= 00:05:13.724 17:21:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.724 17:21:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.724 17:21:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.724 17:21:39 -- accel/accel.sh@20 -- # val= 00:05:13.724 17:21:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.724 17:21:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.724 17:21:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.724 17:21:39 -- accel/accel.sh@20 -- # val= 00:05:13.724 17:21:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.724 17:21:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.724 17:21:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.724 17:21:39 -- accel/accel.sh@20 -- # val= 00:05:13.724 17:21:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.724 17:21:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.724 17:21:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.724 17:21:39 -- accel/accel.sh@20 -- # val= 00:05:13.724 17:21:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.724 17:21:39 -- accel/accel.sh@19 -- # IFS=: 00:05:13.724 17:21:39 -- accel/accel.sh@19 -- # read -r var val 00:05:13.724 17:21:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.724 17:21:39 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:13.724 17:21:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.724 00:05:13.724 real 0m1.466s 00:05:13.724 user 0m1.328s 00:05:13.724 sys 0m0.137s 00:05:13.724 17:21:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.724 17:21:39 -- common/autotest_common.sh@10 -- # set +x 00:05:13.724 ************************************ 00:05:13.724 END TEST accel_xor 00:05:13.724 ************************************ 00:05:13.724 17:21:39 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:13.724 17:21:39 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:13.724 17:21:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.724 17:21:39 -- common/autotest_common.sh@10 -- # set +x 00:05:13.724 ************************************ 00:05:13.724 START TEST accel_xor 
00:05:13.724 ************************************ 00:05:13.724 17:21:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:05:13.724 17:21:40 -- accel/accel.sh@16 -- # local accel_opc 00:05:13.724 17:21:40 -- accel/accel.sh@17 -- # local accel_module 00:05:13.724 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.724 17:21:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:13.724 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.724 17:21:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:13.724 17:21:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.724 17:21:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.724 17:21:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.724 17:21:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.724 17:21:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.724 17:21:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.724 17:21:40 -- accel/accel.sh@40 -- # local IFS=, 00:05:13.724 17:21:40 -- accel/accel.sh@41 -- # jq -r . 00:05:13.724 [2024-04-18 17:21:40.104828] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:05:13.724 [2024-04-18 17:21:40.104888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905305 ] 00:05:13.724 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.724 [2024-04-18 17:21:40.166792] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.982 [2024-04-18 17:21:40.282895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.982 17:21:40 -- accel/accel.sh@20 -- # val= 00:05:13.982 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.982 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.982 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.982 17:21:40 -- accel/accel.sh@20 -- # val= 00:05:13.982 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.982 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.982 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.982 17:21:40 -- accel/accel.sh@20 -- # val=0x1 00:05:13.982 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.982 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.982 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.982 17:21:40 -- accel/accel.sh@20 -- # val= 00:05:13.982 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.982 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.982 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.982 17:21:40 -- accel/accel.sh@20 -- # val= 00:05:13.982 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.983 17:21:40 -- accel/accel.sh@20 -- # val=xor 00:05:13.983 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.983 17:21:40 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.983 17:21:40 -- accel/accel.sh@20 -- # val=3 00:05:13.983 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.983 17:21:40 -- accel/accel.sh@20 -- # val='4096 
bytes' 00:05:13.983 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.983 17:21:40 -- accel/accel.sh@20 -- # val= 00:05:13.983 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.983 17:21:40 -- accel/accel.sh@20 -- # val=software 00:05:13.983 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.983 17:21:40 -- accel/accel.sh@22 -- # accel_module=software 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.983 17:21:40 -- accel/accel.sh@20 -- # val=32 00:05:13.983 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.983 17:21:40 -- accel/accel.sh@20 -- # val=32 00:05:13.983 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.983 17:21:40 -- accel/accel.sh@20 -- # val=1 00:05:13.983 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.983 17:21:40 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:13.983 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.983 17:21:40 -- accel/accel.sh@20 -- # val=Yes 00:05:13.983 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.983 17:21:40 -- accel/accel.sh@20 -- # val= 00:05:13.983 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:13.983 17:21:40 -- accel/accel.sh@20 -- # val= 00:05:13.983 17:21:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # IFS=: 00:05:13.983 17:21:40 -- accel/accel.sh@19 -- # read -r var val 00:05:15.364 17:21:41 -- accel/accel.sh@20 -- # val= 00:05:15.364 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.364 17:21:41 -- accel/accel.sh@20 -- # val= 00:05:15.364 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.364 17:21:41 -- accel/accel.sh@20 -- # val= 00:05:15.364 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.364 17:21:41 -- accel/accel.sh@20 -- # val= 00:05:15.364 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.364 17:21:41 -- accel/accel.sh@20 -- # val= 00:05:15.364 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # read -r 
var val 00:05:15.364 17:21:41 -- accel/accel.sh@20 -- # val= 00:05:15.364 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.364 17:21:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.364 17:21:41 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:15.364 17:21:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.364 00:05:15.364 real 0m1.470s 00:05:15.364 user 0m1.328s 00:05:15.364 sys 0m0.141s 00:05:15.364 17:21:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.364 17:21:41 -- common/autotest_common.sh@10 -- # set +x 00:05:15.364 ************************************ 00:05:15.364 END TEST accel_xor 00:05:15.364 ************************************ 00:05:15.364 17:21:41 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:15.364 17:21:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:15.364 17:21:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.364 17:21:41 -- common/autotest_common.sh@10 -- # set +x 00:05:15.364 ************************************ 00:05:15.364 START TEST accel_dif_verify 00:05:15.364 ************************************ 00:05:15.364 17:21:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:05:15.364 17:21:41 -- accel/accel.sh@16 -- # local accel_opc 00:05:15.364 17:21:41 -- accel/accel.sh@17 -- # local accel_module 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.364 17:21:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:15.364 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.364 17:21:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:15.364 17:21:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.364 17:21:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.364 17:21:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.364 17:21:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.364 17:21:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.364 17:21:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:15.364 17:21:41 -- accel/accel.sh@40 -- # local IFS=, 00:05:15.364 17:21:41 -- accel/accel.sh@41 -- # jq -r . 00:05:15.364 [2024-04-18 17:21:41.690896] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
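The dif_verify case below differs from the simpler workloads mainly in its buffer settings: the trace feeds two 4096-byte values plus a 512-byte and an 8-byte value, which by the look of it are the DIF block and metadata geometry (the trace does not name those keys, so that reading is an assumption). The real/user/sys triplets printed after each case have the shape of bash's time keyword, so a by-hand rerun could be timed the same way:

    # Sketch only; flags copied from the trace above, harness JSON config on
    # /dev/fd/62 omitted (assuming the software path runs without it).
    time /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify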
00:05:15.364 [2024-04-18 17:21:41.690959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905479 ] 00:05:15.364 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.364 [2024-04-18 17:21:41.753564] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.364 [2024-04-18 17:21:41.869768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val= 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val= 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val=0x1 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val= 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val= 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val=dif_verify 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val= 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val=software 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@22 -- # accel_module=software 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r 
var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val=32 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val=32 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val=1 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val=No 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val= 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.623 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:15.623 17:21:41 -- accel/accel.sh@20 -- # val= 00:05:15.623 17:21:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:15.624 17:21:41 -- accel/accel.sh@19 -- # IFS=: 00:05:15.624 17:21:41 -- accel/accel.sh@19 -- # read -r var val 00:05:17.007 17:21:43 -- accel/accel.sh@20 -- # val= 00:05:17.007 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.007 17:21:43 -- accel/accel.sh@20 -- # val= 00:05:17.007 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.007 17:21:43 -- accel/accel.sh@20 -- # val= 00:05:17.007 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.007 17:21:43 -- accel/accel.sh@20 -- # val= 00:05:17.007 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.007 17:21:43 -- accel/accel.sh@20 -- # val= 00:05:17.007 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.007 17:21:43 -- accel/accel.sh@20 -- # val= 00:05:17.007 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.007 17:21:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:17.007 17:21:43 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:17.007 17:21:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.007 00:05:17.007 real 0m1.484s 00:05:17.007 user 0m1.335s 00:05:17.007 sys 0m0.151s 00:05:17.007 17:21:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.007 17:21:43 -- common/autotest_common.sh@10 -- # set +x 00:05:17.007 
************************************ 00:05:17.007 END TEST accel_dif_verify 00:05:17.007 ************************************ 00:05:17.007 17:21:43 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:17.007 17:21:43 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:17.007 17:21:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.007 17:21:43 -- common/autotest_common.sh@10 -- # set +x 00:05:17.007 ************************************ 00:05:17.007 START TEST accel_dif_generate 00:05:17.007 ************************************ 00:05:17.007 17:21:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:05:17.007 17:21:43 -- accel/accel.sh@16 -- # local accel_opc 00:05:17.007 17:21:43 -- accel/accel.sh@17 -- # local accel_module 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.007 17:21:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:17.007 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.007 17:21:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:17.007 17:21:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:17.007 17:21:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.007 17:21:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.007 17:21:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.007 17:21:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.007 17:21:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.007 17:21:43 -- accel/accel.sh@40 -- # local IFS=, 00:05:17.007 17:21:43 -- accel/accel.sh@41 -- # jq -r . 00:05:17.007 [2024-04-18 17:21:43.301631] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
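Each case boots a fresh accel_perf application: the DPDK EAL parameters line shows a single-core mask (-c 0x1), --no-shconf, --huge-unlink and a per-PID --file-prefix, and the recurring "EAL: No free 2048 kB hugepages reported on node 1" notice is evidently informational on this host, since every case still completes. A hedged way to inspect the hugepage state that notice refers to (not something the harness itself runs):

    # Overall 2 MB hugepage counters, then the per-NUMA-node free counts.
    grep -i huge /proc/meminfo
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages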
00:05:17.007 [2024-04-18 17:21:43.301707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905658 ] 00:05:17.007 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.007 [2024-04-18 17:21:43.365059] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.007 [2024-04-18 17:21:43.484816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val= 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val= 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val=0x1 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val= 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val= 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val=dif_generate 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val= 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val=software 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@22 -- # accel_module=software 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read 
-r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val=32 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val=32 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val=1 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val=No 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val= 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:17.268 17:21:43 -- accel/accel.sh@20 -- # val= 00:05:17.268 17:21:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # IFS=: 00:05:17.268 17:21:43 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:44 -- accel/accel.sh@20 -- # val= 00:05:18.649 17:21:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:44 -- accel/accel.sh@20 -- # val= 00:05:18.649 17:21:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:44 -- accel/accel.sh@20 -- # val= 00:05:18.649 17:21:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:44 -- accel/accel.sh@20 -- # val= 00:05:18.649 17:21:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:44 -- accel/accel.sh@20 -- # val= 00:05:18.649 17:21:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:44 -- accel/accel.sh@20 -- # val= 00:05:18.649 17:21:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.649 17:21:44 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:18.649 17:21:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.649 00:05:18.649 real 0m1.484s 00:05:18.649 user 0m1.343s 00:05:18.649 sys 0m0.144s 00:05:18.649 17:21:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.649 17:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:18.649 
************************************ 00:05:18.649 END TEST accel_dif_generate 00:05:18.649 ************************************ 00:05:18.649 17:21:44 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:18.649 17:21:44 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:18.649 17:21:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.649 17:21:44 -- common/autotest_common.sh@10 -- # set +x 00:05:18.649 ************************************ 00:05:18.649 START TEST accel_dif_generate_copy 00:05:18.649 ************************************ 00:05:18.649 17:21:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:05:18.649 17:21:44 -- accel/accel.sh@16 -- # local accel_opc 00:05:18.649 17:21:44 -- accel/accel.sh@17 -- # local accel_module 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:18.649 17:21:44 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:18.649 17:21:44 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.649 17:21:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.649 17:21:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.649 17:21:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.649 17:21:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.649 17:21:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.649 17:21:44 -- accel/accel.sh@40 -- # local IFS=, 00:05:18.649 17:21:44 -- accel/accel.sh@41 -- # jq -r . 00:05:18.649 [2024-04-18 17:21:44.907937] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
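Wall times across these software-path cases cluster around 1.46-1.49 s; if -t 1 is read as a one-second measurement window, roughly 0.46-0.49 s per case goes to application start-up and teardown (a rough reading, not a number the log states). A hedged convenience one-liner to pull those times out of a saved copy of this console output (build.log is a hypothetical file name, not one the job creates):

    # List the per-case wall times recorded by the run_test timing output.
    grep -o 'real[[:space:]]*[0-9]*m[0-9.]*s' build.log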
00:05:18.649 [2024-04-18 17:21:44.908001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905922 ] 00:05:18.649 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.649 [2024-04-18 17:21:44.969885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.649 [2024-04-18 17:21:45.092547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.649 17:21:45 -- accel/accel.sh@20 -- # val= 00:05:18.649 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:45 -- accel/accel.sh@20 -- # val= 00:05:18.649 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:45 -- accel/accel.sh@20 -- # val=0x1 00:05:18.649 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:45 -- accel/accel.sh@20 -- # val= 00:05:18.649 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:45 -- accel/accel.sh@20 -- # val= 00:05:18.649 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:45 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:18.649 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:45 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.649 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.649 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.649 17:21:45 -- accel/accel.sh@20 -- # val= 00:05:18.649 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.649 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.650 17:21:45 -- accel/accel.sh@20 -- # val=software 00:05:18.650 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.650 17:21:45 -- accel/accel.sh@22 -- # accel_module=software 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.650 17:21:45 -- accel/accel.sh@20 -- # val=32 00:05:18.650 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.650 17:21:45 -- accel/accel.sh@20 -- # val=32 00:05:18.650 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # read -r 
var val 00:05:18.650 17:21:45 -- accel/accel.sh@20 -- # val=1 00:05:18.650 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.650 17:21:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.650 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.650 17:21:45 -- accel/accel.sh@20 -- # val=No 00:05:18.650 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.650 17:21:45 -- accel/accel.sh@20 -- # val= 00:05:18.650 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:18.650 17:21:45 -- accel/accel.sh@20 -- # val= 00:05:18.650 17:21:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # IFS=: 00:05:18.650 17:21:45 -- accel/accel.sh@19 -- # read -r var val 00:05:20.032 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.033 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.033 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.033 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.033 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.033 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.033 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.033 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.033 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.033 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.033 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.033 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.033 17:21:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.033 17:21:46 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:20.033 17:21:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.033 00:05:20.033 real 0m1.484s 00:05:20.033 user 0m1.338s 00:05:20.033 sys 0m0.146s 00:05:20.033 17:21:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.033 17:21:46 -- common/autotest_common.sh@10 -- # set +x 00:05:20.033 ************************************ 00:05:20.033 END TEST accel_dif_generate_copy 00:05:20.033 ************************************ 00:05:20.033 17:21:46 -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:20.033 17:21:46 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:20.033 17:21:46 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:20.033 17:21:46 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.033 17:21:46 -- common/autotest_common.sh@10 -- # set +x 00:05:20.033 ************************************ 00:05:20.033 START TEST accel_comp 00:05:20.033 ************************************ 00:05:20.033 17:21:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:20.033 17:21:46 -- accel/accel.sh@16 -- # local accel_opc 00:05:20.033 17:21:46 -- accel/accel.sh@17 -- # local accel_module 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.033 17:21:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:20.033 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.033 17:21:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:20.033 17:21:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:20.033 17:21:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.033 17:21:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.033 17:21:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.033 17:21:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.033 17:21:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.033 17:21:46 -- accel/accel.sh@40 -- # local IFS=, 00:05:20.033 17:21:46 -- accel/accel.sh@41 -- # jq -r . 00:05:20.033 [2024-04-18 17:21:46.513475] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:05:20.033 [2024-04-18 17:21:46.513545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906093 ] 00:05:20.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.293 [2024-04-18 17:21:46.580117] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.293 [2024-04-18 17:21:46.700843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val=0x1 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- 
accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val=compress 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@23 -- # accel_opc=compress 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val=software 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@22 -- # accel_module=software 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val=32 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val=32 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val=1 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val=No 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:20.293 17:21:46 -- accel/accel.sh@20 -- # val= 00:05:20.293 17:21:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # IFS=: 00:05:20.293 17:21:46 -- accel/accel.sh@19 -- # read -r var val 00:05:21.673 17:21:47 -- accel/accel.sh@20 -- # val= 00:05:21.673 17:21:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.673 17:21:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.673 17:21:47 -- accel/accel.sh@19 -- # read -r var val 00:05:21.673 17:21:47 -- accel/accel.sh@20 -- # val= 00:05:21.673 17:21:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.673 17:21:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.673 17:21:47 -- accel/accel.sh@19 -- # read -r var 
val 00:05:21.673 17:21:47 -- accel/accel.sh@20 -- # val= 00:05:21.673 17:21:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.673 17:21:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.673 17:21:47 -- accel/accel.sh@19 -- # read -r var val 00:05:21.673 17:21:47 -- accel/accel.sh@20 -- # val= 00:05:21.673 17:21:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.673 17:21:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.673 17:21:47 -- accel/accel.sh@19 -- # read -r var val 00:05:21.673 17:21:47 -- accel/accel.sh@20 -- # val= 00:05:21.673 17:21:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.673 17:21:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.673 17:21:47 -- accel/accel.sh@19 -- # read -r var val 00:05:21.673 17:21:47 -- accel/accel.sh@20 -- # val= 00:05:21.673 17:21:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.673 17:21:47 -- accel/accel.sh@19 -- # IFS=: 00:05:21.673 17:21:47 -- accel/accel.sh@19 -- # read -r var val 00:05:21.673 17:21:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.673 17:21:47 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:21.673 17:21:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.673 00:05:21.673 real 0m1.490s 00:05:21.673 user 0m1.342s 00:05:21.673 sys 0m0.149s 00:05:21.673 17:21:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:21.673 17:21:47 -- common/autotest_common.sh@10 -- # set +x 00:05:21.673 ************************************ 00:05:21.673 END TEST accel_comp 00:05:21.673 ************************************ 00:05:21.673 17:21:48 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:21.673 17:21:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:21.673 17:21:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.673 17:21:48 -- common/autotest_common.sh@10 -- # set +x 00:05:21.673 ************************************ 00:05:21.673 START TEST accel_decomp 00:05:21.673 ************************************ 00:05:21.674 17:21:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:21.674 17:21:48 -- accel/accel.sh@16 -- # local accel_opc 00:05:21.674 17:21:48 -- accel/accel.sh@17 -- # local accel_module 00:05:21.674 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.674 17:21:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:21.674 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.674 17:21:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:05:21.674 17:21:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.674 17:21:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.674 17:21:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.674 17:21:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.674 17:21:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.674 17:21:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.674 17:21:48 -- accel/accel.sh@40 -- # local IFS=, 00:05:21.674 17:21:48 -- accel/accel.sh@41 -- # jq -r . 00:05:21.674 [2024-04-18 17:21:48.128169] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:05:21.674 [2024-04-18 17:21:48.128236] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906365 ] 00:05:21.674 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.674 [2024-04-18 17:21:48.196026] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.934 [2024-04-18 17:21:48.316749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val= 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val= 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val= 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val=0x1 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val= 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val= 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val=decompress 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val= 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val=software 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@22 -- # accel_module=software 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val=32 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- 
accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val=32 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val=1 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val=Yes 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val= 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 17:21:48 -- accel/accel.sh@20 -- # val= 00:05:21.935 17:21:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 17:21:48 -- accel/accel.sh@19 -- # read -r var val 00:05:23.313 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.313 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.313 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.313 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.313 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.313 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.313 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.313 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.313 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.313 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.313 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.313 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.313 17:21:49 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.313 17:21:49 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:23.313 17:21:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.313 00:05:23.313 real 0m1.487s 00:05:23.313 user 0m1.338s 00:05:23.313 sys 0m0.151s 00:05:23.313 17:21:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.313 17:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:23.313 ************************************ 00:05:23.313 END TEST accel_decomp 00:05:23.313 ************************************ 00:05:23.313 17:21:49 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:23.313 17:21:49 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:23.313 17:21:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.313 17:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:23.313 ************************************ 00:05:23.313 START TEST accel_decmop_full 00:05:23.313 ************************************ 00:05:23.313 17:21:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:23.313 17:21:49 -- accel/accel.sh@16 -- # local accel_opc 00:05:23.313 17:21:49 -- accel/accel.sh@17 -- # local accel_module 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.313 17:21:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:23.313 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.313 17:21:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:23.313 17:21:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.313 17:21:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.313 17:21:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.313 17:21:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.313 17:21:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.313 17:21:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.313 17:21:49 -- accel/accel.sh@40 -- # local IFS=, 00:05:23.313 17:21:49 -- accel/accel.sh@41 -- # jq -r . 00:05:23.313 [2024-04-18 17:21:49.737269] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
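[Editor's note] The run starting above is the "full" decompress case. Compared with the accel_decomp run earlier in this section, the only added argument is '-o 0', and the trace's data size changes from '4096 bytes' to '111250 bytes'. A side-by-side sketch with the same workspace paths; reading '-o 0' as "operate on the whole input file in one shot" is an inference from those trace values, not from accel_perf documentation, and the '{}' config is again a stand-in.
  PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
  BIB=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
  # plain decompress run: 4096-byte operations, per the earlier trace
  "$PERF" -c <(echo '{}') -t 1 -w decompress -l "$BIB" -y
  # "full" variant: adds -o 0; the trace then reports 111250-byte operations
  "$PERF" -c <(echo '{}') -t 1 -w decompress -l "$BIB" -y -o 0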
00:05:23.313 [2024-04-18 17:21:49.737335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906537 ] 00:05:23.313 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.313 [2024-04-18 17:21:49.795896] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.575 [2024-04-18 17:21:49.906611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val=0x1 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val=decompress 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val=software 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@22 -- # accel_module=software 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val=32 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- 
accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val=32 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val=1 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val=Yes 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:23.575 17:21:49 -- accel/accel.sh@20 -- # val= 00:05:23.575 17:21:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # IFS=: 00:05:23.575 17:21:49 -- accel/accel.sh@19 -- # read -r var val 00:05:25.010 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.010 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.010 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.010 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.010 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.010 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.010 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.010 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.010 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.010 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.010 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.010 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.010 17:21:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.010 17:21:51 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:25.010 17:21:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.010 00:05:25.010 real 0m1.475s 00:05:25.010 user 0m1.335s 00:05:25.010 sys 0m0.141s 00:05:25.010 17:21:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:25.010 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:25.010 ************************************ 00:05:25.010 END TEST accel_decmop_full 00:05:25.010 ************************************ 00:05:25.010 17:21:51 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:25.010 17:21:51 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:25.010 17:21:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.010 17:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:25.010 ************************************ 00:05:25.010 START TEST accel_decomp_mcore 00:05:25.010 ************************************ 00:05:25.010 17:21:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:25.010 17:21:51 -- accel/accel.sh@16 -- # local accel_opc 00:05:25.010 17:21:51 -- accel/accel.sh@17 -- # local accel_module 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.010 17:21:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:25.010 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.010 17:21:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:25.010 17:21:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.010 17:21:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.010 17:21:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.010 17:21:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.010 17:21:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.010 17:21:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.010 17:21:51 -- accel/accel.sh@40 -- # local IFS=, 00:05:25.010 17:21:51 -- accel/accel.sh@41 -- # jq -r . 00:05:25.010 [2024-04-18 17:21:51.337064] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
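[Editor's note] The decompress run starting above is the first multicore one: its '-m 0xf' surfaces as '-c 0xf' in the EAL parameters line below, and four reactors come up on cores 0 through 3. The helper below is purely hypothetical (it is not part of the SPDK scripts) and only illustrates how such a hex core mask relates to a core count.
  # hypothetical helper: build an "all of the first N cores" mask such as 0xf
  cores_to_mask() {
    local n=$1
    printf '0x%x\n' $(( (1 << n) - 1 ))
  }
  cores_to_mask 4   # -> 0xf, the mask passed as '-m 0xf' for the mcore tests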
00:05:25.010 [2024-04-18 17:21:51.337129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906782 ] 00:05:25.010 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.010 [2024-04-18 17:21:51.403229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.010 [2024-04-18 17:21:51.527104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.010 [2024-04-18 17:21:51.527162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.010 [2024-04-18 17:21:51.527214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.010 [2024-04-18 17:21:51.527218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val=0xf 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val=decompress 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val=software 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@22 -- # accel_module=software 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" 
in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val=32 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val=32 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val=1 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val=Yes 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:25.269 17:21:51 -- accel/accel.sh@20 -- # val= 00:05:25.269 17:21:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # IFS=: 00:05:25.269 17:21:51 -- accel/accel.sh@19 -- # read -r var val 00:05:26.648 17:21:52 -- accel/accel.sh@20 -- # val= 00:05:26.648 17:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # IFS=: 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # read -r var val 00:05:26.648 17:21:52 -- accel/accel.sh@20 -- # val= 00:05:26.648 17:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # IFS=: 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # read -r var val 00:05:26.648 17:21:52 -- accel/accel.sh@20 -- # val= 00:05:26.648 17:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # IFS=: 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # read -r var val 00:05:26.648 17:21:52 -- accel/accel.sh@20 -- # val= 00:05:26.648 17:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # IFS=: 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # read -r var val 00:05:26.648 17:21:52 -- accel/accel.sh@20 -- # val= 00:05:26.648 17:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # IFS=: 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # read -r var val 00:05:26.648 17:21:52 -- accel/accel.sh@20 -- # val= 00:05:26.648 17:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # IFS=: 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # read -r var val 00:05:26.648 17:21:52 -- accel/accel.sh@20 -- # val= 00:05:26.648 17:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # IFS=: 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # read -r var val 00:05:26.648 17:21:52 -- accel/accel.sh@20 -- # val= 00:05:26.648 17:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.648 17:21:52 
-- accel/accel.sh@19 -- # IFS=: 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # read -r var val 00:05:26.648 17:21:52 -- accel/accel.sh@20 -- # val= 00:05:26.648 17:21:52 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # IFS=: 00:05:26.648 17:21:52 -- accel/accel.sh@19 -- # read -r var val 00:05:26.648 17:21:52 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.648 17:21:52 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:26.648 17:21:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.648 00:05:26.648 real 0m1.501s 00:05:26.648 user 0m4.810s 00:05:26.648 sys 0m0.158s 00:05:26.648 17:21:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:26.648 17:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:26.648 ************************************ 00:05:26.648 END TEST accel_decomp_mcore 00:05:26.648 ************************************ 00:05:26.648 17:21:52 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:26.648 17:21:52 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:26.648 17:21:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.648 17:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:26.648 ************************************ 00:05:26.649 START TEST accel_decomp_full_mcore 00:05:26.649 ************************************ 00:05:26.649 17:21:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:26.649 17:21:52 -- accel/accel.sh@16 -- # local accel_opc 00:05:26.649 17:21:52 -- accel/accel.sh@17 -- # local accel_module 00:05:26.649 17:21:52 -- accel/accel.sh@19 -- # IFS=: 00:05:26.649 17:21:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:26.649 17:21:52 -- accel/accel.sh@19 -- # read -r var val 00:05:26.649 17:21:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:26.649 17:21:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.649 17:21:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.649 17:21:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.649 17:21:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.649 17:21:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.649 17:21:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.649 17:21:52 -- accel/accel.sh@40 -- # local IFS=, 00:05:26.649 17:21:52 -- accel/accel.sh@41 -- # jq -r . 00:05:26.649 [2024-04-18 17:21:52.967941] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
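[Editor's note] Every case in this section is launched through run_test from common/autotest_common.sh, which prints the asterisk START/END banners and the real/user/sys timings that punctuate this log; the run starting above combines the full-buffer and multicore options (-o 0 -m 0xf). The function below is only a rough approximation of that wrapper, written to make the banner/timing structure easier to follow; the real helper also manages xtrace and exit-status handling.
  run_test_sketch() {
    # print an opening banner, time the wrapped command, print a closing banner
    local name=$1; shift
    printf '************************************\nSTART TEST %s\n' "$name"
    time "$@"
    printf 'END TEST %s\n************************************\n' "$name"
  }
  # e.g.: run_test_sketch accel_decomp_full_mcore accel_perf -t 1 -w decompress -o 0 -m 0xf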
00:05:26.649 [2024-04-18 17:21:52.968005] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906989 ] 00:05:26.649 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.649 [2024-04-18 17:21:53.030251] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.649 [2024-04-18 17:21:53.154221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.649 [2024-04-18 17:21:53.154278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.649 [2024-04-18 17:21:53.154332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.649 [2024-04-18 17:21:53.154336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val= 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val= 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val= 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val=0xf 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val= 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val= 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val=decompress 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val= 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val=software 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@22 -- # accel_module=software 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" 
in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val=32 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val=32 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val=1 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val=Yes 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val= 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:26.908 17:21:53 -- accel/accel.sh@20 -- # val= 00:05:26.908 17:21:53 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # IFS=: 00:05:26.908 17:21:53 -- accel/accel.sh@19 -- # read -r var val 00:05:28.288 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.288 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.288 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.288 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.288 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.288 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.288 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.288 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.288 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.288 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.288 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.288 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.288 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.288 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.288 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.288 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.288 17:21:54 
-- accel/accel.sh@19 -- # IFS=: 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.288 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.288 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.288 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.288 17:21:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.288 17:21:54 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:28.288 17:21:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.288 00:05:28.288 real 0m1.502s 00:05:28.288 user 0m4.833s 00:05:28.288 sys 0m0.161s 00:05:28.288 17:21:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.288 17:21:54 -- common/autotest_common.sh@10 -- # set +x 00:05:28.288 ************************************ 00:05:28.288 END TEST accel_decomp_full_mcore 00:05:28.288 ************************************ 00:05:28.288 17:21:54 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:28.288 17:21:54 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:28.288 17:21:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.288 17:21:54 -- common/autotest_common.sh@10 -- # set +x 00:05:28.288 ************************************ 00:05:28.288 START TEST accel_decomp_mthread 00:05:28.288 ************************************ 00:05:28.288 17:21:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:28.289 17:21:54 -- accel/accel.sh@16 -- # local accel_opc 00:05:28.289 17:21:54 -- accel/accel.sh@17 -- # local accel_module 00:05:28.289 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.289 17:21:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:28.289 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.289 17:21:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:28.289 17:21:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.289 17:21:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.289 17:21:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.289 17:21:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.289 17:21:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.289 17:21:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.289 17:21:54 -- accel/accel.sh@40 -- # local IFS=, 00:05:28.289 17:21:54 -- accel/accel.sh@41 -- # jq -r . 00:05:28.289 [2024-04-18 17:21:54.594417] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
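[Editor's note] The mthread run starting above differs from the earlier single-thread decompress case only by '-T 2', which lines up with the 'val=2' in its trace where the previous runs recorded 'val=1'; treating that as a worker-thread count is inferred from this log rather than from accel_perf's own usage text. A sketch with the same workspace paths and stand-in config:
  PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
  BIB=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
  # same decompress workload as before, but with two threads requested (-T 2)
  "$PERF" -c <(echo '{}') -t 1 -w decompress -l "$BIB" -y -T 2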
00:05:28.289 [2024-04-18 17:21:54.594492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1907160 ] 00:05:28.289 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.289 [2024-04-18 17:21:54.656701] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.289 [2024-04-18 17:21:54.778927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val=0x1 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val=decompress 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val=software 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@22 -- # accel_module=software 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val=32 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- 
accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val=32 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val=2 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.549 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.549 17:21:54 -- accel/accel.sh@20 -- # val=Yes 00:05:28.549 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.550 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.550 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.550 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.550 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.550 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.550 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:28.550 17:21:54 -- accel/accel.sh@20 -- # val= 00:05:28.550 17:21:54 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.550 17:21:54 -- accel/accel.sh@19 -- # IFS=: 00:05:28.550 17:21:54 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.929 17:21:56 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:29.929 17:21:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.929 00:05:29.929 real 0m1.495s 00:05:29.929 user 0m1.352s 00:05:29.929 sys 0m0.145s 00:05:29.929 17:21:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.929 17:21:56 -- common/autotest_common.sh@10 -- # set +x 
00:05:29.929 ************************************ 00:05:29.929 END TEST accel_decomp_mthread 00:05:29.929 ************************************ 00:05:29.929 17:21:56 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:29.929 17:21:56 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:29.929 17:21:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.929 17:21:56 -- common/autotest_common.sh@10 -- # set +x 00:05:29.929 ************************************ 00:05:29.929 START TEST accel_deomp_full_mthread 00:05:29.929 ************************************ 00:05:29.929 17:21:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:29.929 17:21:56 -- accel/accel.sh@16 -- # local accel_opc 00:05:29.929 17:21:56 -- accel/accel.sh@17 -- # local accel_module 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:29.929 17:21:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:29.929 17:21:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.929 17:21:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.929 17:21:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.929 17:21:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.929 17:21:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.929 17:21:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.929 17:21:56 -- accel/accel.sh@40 -- # local IFS=, 00:05:29.929 17:21:56 -- accel/accel.sh@41 -- # jq -r . 00:05:29.929 [2024-04-18 17:21:56.218222] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:05:29.929 [2024-04-18 17:21:56.218281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1907435 ] 00:05:29.929 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.929 [2024-04-18 17:21:56.282648] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.929 [2024-04-18 17:21:56.400454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val=0x1 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:29.929 17:21:56 -- accel/accel.sh@20 -- # val=decompress 00:05:29.929 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.929 17:21:56 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:29.929 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:30.191 17:21:56 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:30.191 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:30.191 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:30.191 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:30.191 17:21:56 -- accel/accel.sh@20 -- # val=software 00:05:30.191 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.191 17:21:56 -- accel/accel.sh@22 -- # accel_module=software 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:30.191 17:21:56 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:05:30.191 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:30.191 17:21:56 -- accel/accel.sh@20 -- # val=32 00:05:30.191 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:30.191 17:21:56 -- 
accel/accel.sh@19 -- # read -r var val 00:05:30.191 17:21:56 -- accel/accel.sh@20 -- # val=32 00:05:30.191 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:30.191 17:21:56 -- accel/accel.sh@20 -- # val=2 00:05:30.191 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:30.191 17:21:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.191 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:30.191 17:21:56 -- accel/accel.sh@20 -- # val=Yes 00:05:30.191 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:30.191 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:30.191 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:30.191 17:21:56 -- accel/accel.sh@20 -- # val= 00:05:30.191 17:21:56 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # IFS=: 00:05:30.191 17:21:56 -- accel/accel.sh@19 -- # read -r var val 00:05:31.570 17:21:57 -- accel/accel.sh@20 -- # val= 00:05:31.570 17:21:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # IFS=: 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # read -r var val 00:05:31.570 17:21:57 -- accel/accel.sh@20 -- # val= 00:05:31.570 17:21:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # IFS=: 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # read -r var val 00:05:31.570 17:21:57 -- accel/accel.sh@20 -- # val= 00:05:31.570 17:21:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # IFS=: 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # read -r var val 00:05:31.570 17:21:57 -- accel/accel.sh@20 -- # val= 00:05:31.570 17:21:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # IFS=: 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # read -r var val 00:05:31.570 17:21:57 -- accel/accel.sh@20 -- # val= 00:05:31.570 17:21:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # IFS=: 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # read -r var val 00:05:31.570 17:21:57 -- accel/accel.sh@20 -- # val= 00:05:31.570 17:21:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # IFS=: 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # read -r var val 00:05:31.570 17:21:57 -- accel/accel.sh@20 -- # val= 00:05:31.570 17:21:57 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # IFS=: 00:05:31.570 17:21:57 -- accel/accel.sh@19 -- # read -r var val 00:05:31.570 17:21:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.570 17:21:57 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:31.570 17:21:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.570 00:05:31.570 real 0m1.521s 00:05:31.570 user 0m1.374s 00:05:31.570 sys 0m0.149s 00:05:31.570 17:21:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.570 17:21:57 -- common/autotest_common.sh@10 -- # set +x 
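For reference, the accel_deomp_full_mthread case above reduces to a single accel_perf invocation. A standalone sketch, with SPDK_DIR standing in for the jenkins workspace path and the harness's "-c /dev/fd/62" JSON config omitted:

  SPDK_DIR=/path/to/spdk   # assumed: a built SPDK tree
  # Flags mirror the logged run: 1 second of the decompress workload against
  # the bundled test/accel/bib input, verifying output (-y), with the same
  # -o 0 and two worker threads (-T 2) as above.
  "$SPDK_DIR/build/examples/accel_perf" \
      -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2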
00:05:31.570 ************************************ 00:05:31.570 END TEST accel_deomp_full_mthread 00:05:31.570 ************************************ 00:05:31.570 17:21:57 -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:31.570 17:21:57 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:31.570 17:21:57 -- accel/accel.sh@137 -- # build_accel_config 00:05:31.570 17:21:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:31.570 17:21:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.570 17:21:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.570 17:21:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.570 17:21:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.570 17:21:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.570 17:21:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.570 17:21:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.570 17:21:57 -- accel/accel.sh@40 -- # local IFS=, 00:05:31.570 17:21:57 -- accel/accel.sh@41 -- # jq -r . 00:05:31.570 ************************************ 00:05:31.570 START TEST accel_dif_functional_tests 00:05:31.570 ************************************ 00:05:31.570 17:21:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:31.570 [2024-04-18 17:21:57.887178] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:05:31.570 [2024-04-18 17:21:57.887252] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1907604 ] 00:05:31.570 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.570 [2024-04-18 17:21:57.954229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.570 [2024-04-18 17:21:58.080649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.570 [2024-04-18 17:21:58.082404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.570 [2024-04-18 17:21:58.082418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.829 00:05:31.829 00:05:31.829 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.829 http://cunit.sourceforge.net/ 00:05:31.829 00:05:31.829 00:05:31.829 Suite: accel_dif 00:05:31.829 Test: verify: DIF generated, GUARD check ...passed 00:05:31.829 Test: verify: DIF generated, APPTAG check ...passed 00:05:31.829 Test: verify: DIF generated, REFTAG check ...passed 00:05:31.829 Test: verify: DIF not generated, GUARD check ...[2024-04-18 17:21:58.182732] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:31.829 [2024-04-18 17:21:58.182801] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:31.829 passed 00:05:31.829 Test: verify: DIF not generated, APPTAG check ...[2024-04-18 17:21:58.182848] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:31.829 [2024-04-18 17:21:58.182883] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:31.829 passed 00:05:31.829 Test: verify: DIF not generated, REFTAG check ...[2024-04-18 17:21:58.182918] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:31.829 [2024-04-18 17:21:58.182956] 
dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:31.829 passed 00:05:31.829 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:31.829 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-18 17:21:58.183029] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:31.829 passed 00:05:31.829 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:31.829 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:31.829 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:31.829 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-18 17:21:58.183185] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:31.829 passed 00:05:31.829 Test: generate copy: DIF generated, GUARD check ...passed 00:05:31.829 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:31.829 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:31.829 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:31.829 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:31.829 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:31.829 Test: generate copy: iovecs-len validate ...[2024-04-18 17:21:58.183457] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:31.829 passed 00:05:31.829 Test: generate copy: buffer alignment validate ...passed 00:05:31.829 00:05:31.830 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.830 suites 1 1 n/a 0 0 00:05:31.830 tests 20 20 20 0 0 00:05:31.830 asserts 204 204 204 0 n/a 00:05:31.830 00:05:31.830 Elapsed time = 0.003 seconds 00:05:32.089 00:05:32.089 real 0m0.606s 00:05:32.089 user 0m0.875s 00:05:32.089 sys 0m0.179s 00:05:32.089 17:21:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.089 17:21:58 -- common/autotest_common.sh@10 -- # set +x 00:05:32.089 ************************************ 00:05:32.089 END TEST accel_dif_functional_tests 00:05:32.089 ************************************ 00:05:32.089 00:05:32.089 real 0m35.952s 00:05:32.089 user 0m38.262s 00:05:32.089 sys 0m5.593s 00:05:32.089 17:21:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.089 17:21:58 -- common/autotest_common.sh@10 -- # set +x 00:05:32.089 ************************************ 00:05:32.089 END TEST accel 00:05:32.089 ************************************ 00:05:32.089 17:21:58 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:32.089 17:21:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.089 17:21:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.089 17:21:58 -- common/autotest_common.sh@10 -- # set +x 00:05:32.089 ************************************ 00:05:32.089 START TEST accel_rpc 00:05:32.089 ************************************ 00:05:32.089 17:21:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:32.350 * Looking for test storage... 
00:05:32.350 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:05:32.350 17:21:58 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:32.350 17:21:58 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1907804 00:05:32.350 17:21:58 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:32.350 17:21:58 -- accel/accel_rpc.sh@15 -- # waitforlisten 1907804 00:05:32.350 17:21:58 -- common/autotest_common.sh@817 -- # '[' -z 1907804 ']' 00:05:32.350 17:21:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.350 17:21:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:32.350 17:21:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.350 17:21:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:32.350 17:21:58 -- common/autotest_common.sh@10 -- # set +x 00:05:32.350 [2024-04-18 17:21:58.700319] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:05:32.350 [2024-04-18 17:21:58.700427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1907804 ] 00:05:32.350 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.350 [2024-04-18 17:21:58.756790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.350 [2024-04-18 17:21:58.860283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.610 17:21:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.610 17:21:58 -- common/autotest_common.sh@850 -- # return 0 00:05:32.610 17:21:58 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:32.610 17:21:58 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:32.610 17:21:58 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:32.610 17:21:58 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:32.610 17:21:58 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:32.610 17:21:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.610 17:21:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.610 17:21:58 -- common/autotest_common.sh@10 -- # set +x 00:05:32.610 ************************************ 00:05:32.610 START TEST accel_assign_opcode 00:05:32.610 ************************************ 00:05:32.610 17:21:58 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:05:32.610 17:21:58 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:32.610 17:21:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:32.610 17:21:58 -- common/autotest_common.sh@10 -- # set +x 00:05:32.610 [2024-04-18 17:21:58.997116] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:32.610 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:32.610 17:21:59 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:32.610 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:32.610 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:05:32.610 [2024-04-18 17:21:59.005124] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation 
copy will be assigned to module software 00:05:32.610 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:32.610 17:21:59 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:32.610 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:32.610 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:05:32.870 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:32.870 17:21:59 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:32.870 17:21:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:32.870 17:21:59 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:32.870 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:05:32.870 17:21:59 -- accel/accel_rpc.sh@42 -- # grep software 00:05:32.870 17:21:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:32.870 software 00:05:32.870 00:05:32.870 real 0m0.295s 00:05:32.870 user 0m0.038s 00:05:32.870 sys 0m0.004s 00:05:32.870 17:21:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.870 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:05:32.870 ************************************ 00:05:32.870 END TEST accel_assign_opcode 00:05:32.870 ************************************ 00:05:32.870 17:21:59 -- accel/accel_rpc.sh@55 -- # killprocess 1907804 00:05:32.870 17:21:59 -- common/autotest_common.sh@936 -- # '[' -z 1907804 ']' 00:05:32.870 17:21:59 -- common/autotest_common.sh@940 -- # kill -0 1907804 00:05:32.870 17:21:59 -- common/autotest_common.sh@941 -- # uname 00:05:32.870 17:21:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.870 17:21:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1907804 00:05:32.870 17:21:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:32.870 17:21:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:32.870 17:21:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1907804' 00:05:32.870 killing process with pid 1907804 00:05:32.870 17:21:59 -- common/autotest_common.sh@955 -- # kill 1907804 00:05:32.870 17:21:59 -- common/autotest_common.sh@960 -- # wait 1907804 00:05:33.440 00:05:33.440 real 0m1.212s 00:05:33.440 user 0m1.164s 00:05:33.440 sys 0m0.446s 00:05:33.440 17:21:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.440 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:05:33.440 ************************************ 00:05:33.440 END TEST accel_rpc 00:05:33.440 ************************************ 00:05:33.440 17:21:59 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:33.440 17:21:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.440 17:21:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.440 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:05:33.440 ************************************ 00:05:33.440 START TEST app_cmdline 00:05:33.440 ************************************ 00:05:33.440 17:21:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:33.700 * Looking for test storage... 
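The accel_rpc suite that just finished exercises opcode assignment over JSON-RPC before module initialization. A condensed sketch of that sequence, assuming an SPDK tree at SPDK_DIR and a scratch target serving the default RPC socket; the RPC names and flags are the ones shown in the trace above:

  SPDK_DIR=/path/to/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"

  # Start the target paused so the assignment lands before framework init.
  "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
  sleep 2   # the harness uses waitforlisten; a sleep stands in here

  "$RPC" accel_assign_opc -o copy -m software   # pin the copy opcode
  "$RPC" framework_start_init                   # finish initialization
  # Verify the assignment, as the test does with jq/grep:
  "$RPC" accel_get_opc_assignments | jq -r .copy | grep software
  # (kill the background target when done)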
00:05:33.700 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:33.700 17:21:59 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:33.700 17:21:59 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1908025 00:05:33.700 17:21:59 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:33.700 17:21:59 -- app/cmdline.sh@18 -- # waitforlisten 1908025 00:05:33.700 17:21:59 -- common/autotest_common.sh@817 -- # '[' -z 1908025 ']' 00:05:33.700 17:21:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.700 17:21:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:33.700 17:21:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.700 17:21:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:33.700 17:21:59 -- common/autotest_common.sh@10 -- # set +x 00:05:33.700 [2024-04-18 17:22:00.050552] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:05:33.700 [2024-04-18 17:22:00.050659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908025 ] 00:05:33.700 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.700 [2024-04-18 17:22:00.118721] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.700 [2024-04-18 17:22:00.234191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.958 17:22:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:33.958 17:22:00 -- common/autotest_common.sh@850 -- # return 0 00:05:33.958 17:22:00 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:34.528 { 00:05:34.528 "version": "SPDK v24.05-pre git sha1 bd2a0934e", 00:05:34.528 "fields": { 00:05:34.528 "major": 24, 00:05:34.528 "minor": 5, 00:05:34.528 "patch": 0, 00:05:34.528 "suffix": "-pre", 00:05:34.528 "commit": "bd2a0934e" 00:05:34.528 } 00:05:34.528 } 00:05:34.528 17:22:00 -- app/cmdline.sh@22 -- # expected_methods=() 00:05:34.528 17:22:00 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:34.528 17:22:00 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:34.528 17:22:00 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:34.528 17:22:00 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:34.528 17:22:00 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:34.528 17:22:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:34.528 17:22:00 -- common/autotest_common.sh@10 -- # set +x 00:05:34.528 17:22:00 -- app/cmdline.sh@26 -- # sort 00:05:34.528 17:22:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:34.528 17:22:00 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:34.528 17:22:00 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:34.528 17:22:00 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:34.528 17:22:00 -- common/autotest_common.sh@638 -- # local es=0 00:05:34.528 17:22:00 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:34.528 17:22:00 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:34.528 17:22:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:34.528 17:22:00 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:34.528 17:22:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:34.529 17:22:00 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:34.529 17:22:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:34.529 17:22:00 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:34.529 17:22:00 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:05:34.529 17:22:00 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:34.529 request: 00:05:34.529 { 00:05:34.529 "method": "env_dpdk_get_mem_stats", 00:05:34.529 "req_id": 1 00:05:34.529 } 00:05:34.529 Got JSON-RPC error response 00:05:34.529 response: 00:05:34.529 { 00:05:34.529 "code": -32601, 00:05:34.529 "message": "Method not found" 00:05:34.529 } 00:05:34.789 17:22:01 -- common/autotest_common.sh@641 -- # es=1 00:05:34.789 17:22:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:34.789 17:22:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:34.789 17:22:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:34.789 17:22:01 -- app/cmdline.sh@1 -- # killprocess 1908025 00:05:34.789 17:22:01 -- common/autotest_common.sh@936 -- # '[' -z 1908025 ']' 00:05:34.789 17:22:01 -- common/autotest_common.sh@940 -- # kill -0 1908025 00:05:34.789 17:22:01 -- common/autotest_common.sh@941 -- # uname 00:05:34.789 17:22:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.789 17:22:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1908025 00:05:34.789 17:22:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.789 17:22:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.789 17:22:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1908025' 00:05:34.789 killing process with pid 1908025 00:05:34.789 17:22:01 -- common/autotest_common.sh@955 -- # kill 1908025 00:05:34.789 17:22:01 -- common/autotest_common.sh@960 -- # wait 1908025 00:05:35.355 00:05:35.355 real 0m1.646s 00:05:35.355 user 0m2.006s 00:05:35.355 sys 0m0.479s 00:05:35.355 17:22:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.355 17:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:35.355 ************************************ 00:05:35.355 END TEST app_cmdline 00:05:35.355 ************************************ 00:05:35.355 17:22:01 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:35.355 17:22:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.355 17:22:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.355 17:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:35.355 ************************************ 00:05:35.355 START TEST version 00:05:35.355 ************************************ 00:05:35.355 
17:22:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:35.355 * Looking for test storage... 00:05:35.355 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:35.355 17:22:01 -- app/version.sh@17 -- # get_header_version major 00:05:35.355 17:22:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:35.355 17:22:01 -- app/version.sh@14 -- # cut -f2 00:05:35.355 17:22:01 -- app/version.sh@14 -- # tr -d '"' 00:05:35.355 17:22:01 -- app/version.sh@17 -- # major=24 00:05:35.355 17:22:01 -- app/version.sh@18 -- # get_header_version minor 00:05:35.355 17:22:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:35.355 17:22:01 -- app/version.sh@14 -- # cut -f2 00:05:35.355 17:22:01 -- app/version.sh@14 -- # tr -d '"' 00:05:35.355 17:22:01 -- app/version.sh@18 -- # minor=5 00:05:35.355 17:22:01 -- app/version.sh@19 -- # get_header_version patch 00:05:35.355 17:22:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:35.355 17:22:01 -- app/version.sh@14 -- # cut -f2 00:05:35.355 17:22:01 -- app/version.sh@14 -- # tr -d '"' 00:05:35.355 17:22:01 -- app/version.sh@19 -- # patch=0 00:05:35.355 17:22:01 -- app/version.sh@20 -- # get_header_version suffix 00:05:35.355 17:22:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:35.355 17:22:01 -- app/version.sh@14 -- # cut -f2 00:05:35.355 17:22:01 -- app/version.sh@14 -- # tr -d '"' 00:05:35.355 17:22:01 -- app/version.sh@20 -- # suffix=-pre 00:05:35.355 17:22:01 -- app/version.sh@22 -- # version=24.5 00:05:35.355 17:22:01 -- app/version.sh@25 -- # (( patch != 0 )) 00:05:35.355 17:22:01 -- app/version.sh@28 -- # version=24.5rc0 00:05:35.355 17:22:01 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:05:35.355 17:22:01 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:35.355 17:22:01 -- app/version.sh@30 -- # py_version=24.5rc0 00:05:35.355 17:22:01 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:35.355 00:05:35.355 real 0m0.111s 00:05:35.355 user 0m0.065s 00:05:35.355 sys 0m0.069s 00:05:35.355 17:22:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.355 17:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:35.356 ************************************ 00:05:35.356 END TEST version 00:05:35.356 ************************************ 00:05:35.356 17:22:01 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:05:35.356 17:22:01 -- spdk/autotest.sh@194 -- # uname -s 00:05:35.356 17:22:01 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:35.356 17:22:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:35.356 17:22:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:35.356 17:22:01 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:35.356 17:22:01 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:05:35.356 17:22:01 -- spdk/autotest.sh@258 -- # 
timing_exit lib 00:05:35.356 17:22:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:35.356 17:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:35.356 17:22:01 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:05:35.356 17:22:01 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:05:35.356 17:22:01 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:05:35.356 17:22:01 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:05:35.356 17:22:01 -- spdk/autotest.sh@281 -- # '[' rdma = rdma ']' 00:05:35.356 17:22:01 -- spdk/autotest.sh@282 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:35.356 17:22:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:35.356 17:22:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.356 17:22:01 -- common/autotest_common.sh@10 -- # set +x 00:05:35.613 ************************************ 00:05:35.613 START TEST nvmf_rdma 00:05:35.613 ************************************ 00:05:35.613 17:22:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:35.613 * Looking for test storage... 00:05:35.613 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:05:35.613 17:22:02 -- nvmf/nvmf.sh@10 -- # uname -s 00:05:35.613 17:22:02 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:35.613 17:22:02 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:35.613 17:22:02 -- nvmf/common.sh@7 -- # uname -s 00:05:35.613 17:22:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.613 17:22:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.613 17:22:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.613 17:22:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.613 17:22:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.613 17:22:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.613 17:22:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.613 17:22:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.613 17:22:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.613 17:22:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.613 17:22:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:35.613 17:22:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:35.613 17:22:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.613 17:22:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.613 17:22:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:35.613 17:22:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.613 17:22:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:35.613 17:22:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.613 17:22:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.613 17:22:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.613 17:22:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.613 17:22:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.613 17:22:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.613 17:22:02 -- paths/export.sh@5 -- # export PATH 00:05:35.613 17:22:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.613 17:22:02 -- nvmf/common.sh@47 -- # : 0 00:05:35.613 17:22:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:35.613 17:22:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:35.613 17:22:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.613 17:22:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.614 17:22:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.614 17:22:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:35.614 17:22:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:35.614 17:22:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:35.614 17:22:02 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:35.614 17:22:02 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:35.614 17:22:02 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:35.614 17:22:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:35.614 17:22:02 -- common/autotest_common.sh@10 -- # set +x 00:05:35.614 17:22:02 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:35.614 17:22:02 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:05:35.614 17:22:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:35.614 17:22:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.614 17:22:02 -- common/autotest_common.sh@10 -- # set +x 00:05:35.614 ************************************ 00:05:35.614 START TEST nvmf_example 00:05:35.614 ************************************ 00:05:35.614 17:22:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:05:35.874 * Looking for test storage... 
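As the run_test dispatch above shows, each nvmf sub-test is an ordinary script that takes the transport as an argument, so a single case can be re-run in isolation. A sketch, assuming a built tree and RDMA-capable NICs already configured as in this run:

  SPDK_DIR=/path/to/spdk   # assumed path to the built SPDK tree
  sudo "$SPDK_DIR/test/nvmf/target/nvmf_example.sh" --transport=rdma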
00:05:35.874 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:35.874 17:22:02 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:35.874 17:22:02 -- nvmf/common.sh@7 -- # uname -s 00:05:35.874 17:22:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.874 17:22:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.874 17:22:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.874 17:22:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.874 17:22:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.874 17:22:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.874 17:22:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.874 17:22:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.874 17:22:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.874 17:22:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.874 17:22:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:35.874 17:22:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:35.874 17:22:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.874 17:22:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.874 17:22:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:35.874 17:22:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.874 17:22:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:35.874 17:22:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.874 17:22:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.874 17:22:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.874 17:22:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.874 17:22:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.874 17:22:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.874 17:22:02 -- paths/export.sh@5 -- # export PATH 00:05:35.874 17:22:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.874 17:22:02 -- nvmf/common.sh@47 -- # : 0 00:05:35.874 17:22:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:35.874 17:22:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:35.874 17:22:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.874 17:22:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.874 17:22:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.874 17:22:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:35.874 17:22:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:35.874 17:22:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:35.874 17:22:02 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:35.874 17:22:02 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:35.874 17:22:02 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:35.874 17:22:02 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:35.874 17:22:02 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:35.874 17:22:02 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:35.874 17:22:02 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:35.874 17:22:02 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:35.874 17:22:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:35.874 17:22:02 -- common/autotest_common.sh@10 -- # set +x 00:05:35.874 17:22:02 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:35.874 17:22:02 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:05:35.874 17:22:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:35.874 17:22:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:35.874 17:22:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:35.874 17:22:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:35.874 17:22:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:35.874 17:22:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:35.874 17:22:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:35.874 17:22:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:05:35.874 17:22:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:35.874 17:22:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:35.874 17:22:02 -- 
common/autotest_common.sh@10 -- # set +x 00:05:37.775 17:22:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:37.775 17:22:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:37.775 17:22:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:37.775 17:22:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:37.775 17:22:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:37.775 17:22:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:37.775 17:22:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:37.775 17:22:04 -- nvmf/common.sh@295 -- # net_devs=() 00:05:37.775 17:22:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:37.775 17:22:04 -- nvmf/common.sh@296 -- # e810=() 00:05:37.775 17:22:04 -- nvmf/common.sh@296 -- # local -ga e810 00:05:37.775 17:22:04 -- nvmf/common.sh@297 -- # x722=() 00:05:37.775 17:22:04 -- nvmf/common.sh@297 -- # local -ga x722 00:05:37.775 17:22:04 -- nvmf/common.sh@298 -- # mlx=() 00:05:37.775 17:22:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:37.775 17:22:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:37.775 17:22:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:37.775 17:22:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:37.775 17:22:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:37.775 17:22:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:37.775 17:22:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:37.775 17:22:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:37.775 17:22:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:37.775 17:22:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:37.775 17:22:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:37.775 17:22:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:37.775 17:22:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:37.775 17:22:04 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:37.775 17:22:04 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:37.775 17:22:04 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:37.775 17:22:04 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:37.775 17:22:04 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:37.775 17:22:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:37.775 17:22:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:37.775 17:22:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:05:37.775 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:05:37.775 17:22:04 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:37.775 17:22:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:37.775 17:22:04 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:05:37.775 17:22:04 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:37.775 17:22:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:37.775 17:22:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:05:37.775 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:05:37.775 17:22:04 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@362 -- 
# NVME_CONNECT='nvme connect -i 15' 00:05:37.776 17:22:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:37.776 17:22:04 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:37.776 17:22:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:37.776 17:22:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:37.776 17:22:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:37.776 17:22:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:05:37.776 Found net devices under 0000:09:00.0: mlx_0_0 00:05:37.776 17:22:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:37.776 17:22:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:37.776 17:22:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:37.776 17:22:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:37.776 17:22:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:37.776 17:22:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:05:37.776 Found net devices under 0000:09:00.1: mlx_0_1 00:05:37.776 17:22:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:37.776 17:22:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:37.776 17:22:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:37.776 17:22:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@409 -- # rdma_device_init 00:05:37.776 17:22:04 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:05:37.776 17:22:04 -- nvmf/common.sh@58 -- # uname 00:05:37.776 17:22:04 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:37.776 17:22:04 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:37.776 17:22:04 -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:37.776 17:22:04 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:37.776 17:22:04 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:37.776 17:22:04 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:37.776 17:22:04 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:37.776 17:22:04 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:37.776 17:22:04 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:05:37.776 17:22:04 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:37.776 17:22:04 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:37.776 17:22:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:37.776 17:22:04 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:37.776 17:22:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:37.776 17:22:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:37.776 17:22:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:37.776 17:22:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:37.776 17:22:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:37.776 17:22:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:37.776 17:22:04 -- nvmf/common.sh@105 -- # continue 2 00:05:37.776 17:22:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:37.776 17:22:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:37.776 17:22:04 -- nvmf/common.sh@103 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:37.776 17:22:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:37.776 17:22:04 -- nvmf/common.sh@105 -- # continue 2 00:05:37.776 17:22:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:37.776 17:22:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:37.776 17:22:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:37.776 17:22:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:37.776 17:22:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:37.776 17:22:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:37.776 17:22:04 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:05:37.776 17:22:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:37.776 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:37.776 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:05:37.776 altname enp9s0f0np0 00:05:37.776 inet 192.168.100.8/24 scope global mlx_0_0 00:05:37.776 valid_lft forever preferred_lft forever 00:05:37.776 17:22:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:37.776 17:22:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:37.776 17:22:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:37.776 17:22:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:37.776 17:22:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:37.776 17:22:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:37.776 17:22:04 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:05:37.776 17:22:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:37.776 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:37.776 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:05:37.776 altname enp9s0f1np1 00:05:37.776 inet 192.168.100.9/24 scope global mlx_0_1 00:05:37.776 valid_lft forever preferred_lft forever 00:05:37.776 17:22:04 -- nvmf/common.sh@411 -- # return 0 00:05:37.776 17:22:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:05:37.776 17:22:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:37.776 17:22:04 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:05:37.776 17:22:04 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:37.776 17:22:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:37.776 17:22:04 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:37.776 17:22:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:37.776 17:22:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:37.776 17:22:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:37.776 17:22:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:37.776 17:22:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:37.776 17:22:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:37.776 17:22:04 -- nvmf/common.sh@105 -- # continue 2 00:05:37.776 17:22:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:37.776 17:22:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:37.776 17:22:04 -- 
nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:37.776 17:22:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:37.776 17:22:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:37.776 17:22:04 -- nvmf/common.sh@105 -- # continue 2 00:05:37.776 17:22:04 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:37.776 17:22:04 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:37.776 17:22:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:37.776 17:22:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:37.776 17:22:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:37.776 17:22:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:37.776 17:22:04 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:37.776 17:22:04 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:37.776 17:22:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:37.776 17:22:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:37.776 17:22:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:37.776 17:22:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:37.776 17:22:04 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:05:37.776 192.168.100.9' 00:05:37.776 17:22:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:37.776 192.168.100.9' 00:05:37.776 17:22:04 -- nvmf/common.sh@446 -- # head -n 1 00:05:37.776 17:22:04 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:37.776 17:22:04 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:05:37.776 192.168.100.9' 00:05:37.776 17:22:04 -- nvmf/common.sh@447 -- # head -n 1 00:05:37.776 17:22:04 -- nvmf/common.sh@447 -- # tail -n +2 00:05:37.776 17:22:04 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:37.776 17:22:04 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:05:37.776 17:22:04 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:37.776 17:22:04 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:05:37.776 17:22:04 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:05:37.776 17:22:04 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:05:37.776 17:22:04 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:37.776 17:22:04 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:37.776 17:22:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:37.776 17:22:04 -- common/autotest_common.sh@10 -- # set +x 00:05:38.039 17:22:04 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:05:38.039 17:22:04 -- target/nvmf_example.sh@34 -- # nvmfpid=1910091 00:05:38.039 17:22:04 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:38.039 17:22:04 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:38.039 17:22:04 -- target/nvmf_example.sh@36 -- # waitforlisten 1910091 00:05:38.039 17:22:04 -- common/autotest_common.sh@817 -- # '[' -z 1910091 ']' 00:05:38.039 17:22:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.039 17:22:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:38.039 17:22:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
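Editor's note: the first and second target IPs are peeled off the newline-separated RDMA_IP_LIST with head/tail, exactly as traced above. A compact restatement of that selection (sketch):

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  # Mirror the '-z' guard: no usable RDMA IP means the test cannot proceed.
  [ -z "$NVMF_FIRST_TARGET_IP" ] && { echo "no RDMA-capable IP found" >&2; exit 1; }
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'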
00:05:38.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.039 17:22:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:38.039 17:22:04 -- common/autotest_common.sh@10 -- # set +x 00:05:38.039 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.975 17:22:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:38.975 17:22:05 -- common/autotest_common.sh@850 -- # return 0 00:05:38.975 17:22:05 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:38.975 17:22:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:38.975 17:22:05 -- common/autotest_common.sh@10 -- # set +x 00:05:38.975 17:22:05 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:05:38.975 17:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:38.975 17:22:05 -- common/autotest_common.sh@10 -- # set +x 00:05:39.236 17:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:39.236 17:22:05 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:39.236 17:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:39.236 17:22:05 -- common/autotest_common.sh@10 -- # set +x 00:05:39.236 17:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:39.236 17:22:05 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:39.236 17:22:05 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:39.236 17:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:39.236 17:22:05 -- common/autotest_common.sh@10 -- # set +x 00:05:39.236 17:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:39.236 17:22:05 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:39.236 17:22:05 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:39.236 17:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:39.236 17:22:05 -- common/autotest_common.sh@10 -- # set +x 00:05:39.236 17:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:39.236 17:22:05 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:39.236 17:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:39.236 17:22:05 -- common/autotest_common.sh@10 -- # set +x 00:05:39.236 17:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:39.236 17:22:05 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:39.236 17:22:05 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:39.236 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.476 Initializing NVMe Controllers 00:05:51.476 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:05:51.476 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:51.476 Initialization complete. Launching workers. 
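Editor's note: the example test provisions the target through the RPC calls traced above and then drives it with spdk_nvme_perf. The same sequence written out as plain rpc.py invocations (a sketch; $SPDK_DIR stands in for the checkout path used in this job, and the calls go to the default /var/tmp/spdk.sock):

  rpc=$SPDK_DIR/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512                 # 64 MiB bdev, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420
  # Load generator: queue depth 64, 4 KiB I/O, 30% read mix, 10 seconds.
  $SPDK_DIR/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'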
00:05:51.476 ======================================================== 00:05:51.476 Latency(us) 00:05:51.476 Device Information : IOPS MiB/s Average min max 00:05:51.476 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 20734.71 80.99 3086.61 793.02 16094.11 00:05:51.476 ======================================================== 00:05:51.476 Total : 20734.71 80.99 3086.61 793.02 16094.11 00:05:51.476 00:05:51.476 17:22:16 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:51.476 17:22:16 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:51.476 17:22:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:05:51.476 17:22:16 -- nvmf/common.sh@117 -- # sync 00:05:51.476 17:22:16 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:05:51.476 17:22:16 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:05:51.476 17:22:16 -- nvmf/common.sh@120 -- # set +e 00:05:51.476 17:22:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:51.476 17:22:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:05:51.476 rmmod nvme_rdma 00:05:51.476 rmmod nvme_fabrics 00:05:51.476 17:22:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:51.476 17:22:16 -- nvmf/common.sh@124 -- # set -e 00:05:51.476 17:22:16 -- nvmf/common.sh@125 -- # return 0 00:05:51.476 17:22:16 -- nvmf/common.sh@478 -- # '[' -n 1910091 ']' 00:05:51.476 17:22:16 -- nvmf/common.sh@479 -- # killprocess 1910091 00:05:51.476 17:22:16 -- common/autotest_common.sh@936 -- # '[' -z 1910091 ']' 00:05:51.476 17:22:16 -- common/autotest_common.sh@940 -- # kill -0 1910091 00:05:51.476 17:22:16 -- common/autotest_common.sh@941 -- # uname 00:05:51.476 17:22:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.476 17:22:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1910091 00:05:51.476 17:22:16 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:05:51.476 17:22:16 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:05:51.476 17:22:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1910091' 00:05:51.476 killing process with pid 1910091 00:05:51.476 17:22:16 -- common/autotest_common.sh@955 -- # kill 1910091 00:05:51.476 17:22:16 -- common/autotest_common.sh@960 -- # wait 1910091 00:05:51.476 nvmf threads initialize successfully 00:05:51.476 bdev subsystem init successfully 00:05:51.476 created a nvmf target service 00:05:51.476 create targets's poll groups done 00:05:51.476 all subsystems of target started 00:05:51.476 nvmf target is running 00:05:51.476 all subsystems of target stopped 00:05:51.476 destroy targets's poll groups done 00:05:51.476 destroyed the nvmf target service 00:05:51.476 bdev subsystem finish successfully 00:05:51.476 nvmf threads destroy successfully 00:05:51.476 17:22:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:05:51.476 17:22:17 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:05:51.476 17:22:17 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:51.476 17:22:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:51.476 17:22:17 -- common/autotest_common.sh@10 -- # set +x 00:05:51.476 00:05:51.476 real 0m15.102s 00:05:51.476 user 0m51.645s 00:05:51.476 sys 0m1.883s 00:05:51.476 17:22:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.476 17:22:17 -- common/autotest_common.sh@10 -- # set +x 00:05:51.476 ************************************ 00:05:51.476 END TEST nvmf_example 00:05:51.476 ************************************ 00:05:51.476 17:22:17 -- nvmf/nvmf.sh@24 -- # 
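Editor's note: nvmftestfini above unloads the fabrics modules with a retry loop and then stops the target by PID after checking it is still alive. The sketch below restates that teardown shape; the sudo branch is an assumption about how a sudo-wrapped app would be handled, not a copy of the real killprocess helper.

  cleanup_nvmf_rdma() {
      local i
      for i in {1..20}; do
          modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
          sleep 1                                   # references can linger briefly
      done
  }
  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
      if [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
          sudo kill "$pid"                          # assumption: escalate for sudo-wrapped apps
      else
          kill "$pid" && wait "$pid" 2>/dev/null
      fi
  }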
run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:05:51.476 17:22:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:51.476 17:22:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.476 17:22:17 -- common/autotest_common.sh@10 -- # set +x 00:05:51.476 ************************************ 00:05:51.476 START TEST nvmf_filesystem 00:05:51.476 ************************************ 00:05:51.476 17:22:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:05:51.476 * Looking for test storage... 00:05:51.476 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:51.476 17:22:17 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:05:51.476 17:22:17 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:51.476 17:22:17 -- common/autotest_common.sh@34 -- # set -e 00:05:51.476 17:22:17 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:51.476 17:22:17 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:51.476 17:22:17 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:05:51.476 17:22:17 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:51.476 17:22:17 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:05:51.476 17:22:17 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:51.476 17:22:17 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:51.476 17:22:17 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:51.476 17:22:17 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:51.476 17:22:17 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:51.476 17:22:17 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:51.476 17:22:17 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:51.476 17:22:17 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:51.476 17:22:17 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:51.476 17:22:17 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:51.476 17:22:17 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:51.476 17:22:17 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:51.476 17:22:17 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:51.476 17:22:17 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:51.476 17:22:17 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:51.476 17:22:17 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:51.476 17:22:17 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:51.476 17:22:17 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:51.476 17:22:17 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:05:51.476 17:22:17 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:51.476 17:22:17 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:51.476 17:22:17 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:51.476 17:22:17 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:51.476 17:22:17 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:51.476 17:22:17 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:51.476 
17:22:17 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:51.476 17:22:17 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:51.476 17:22:17 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:51.476 17:22:17 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:51.476 17:22:17 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:51.476 17:22:17 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:51.476 17:22:17 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:51.476 17:22:17 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:51.476 17:22:17 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:51.476 17:22:17 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:51.476 17:22:17 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:05:51.476 17:22:17 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:51.476 17:22:17 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:51.476 17:22:17 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:51.476 17:22:17 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:51.476 17:22:17 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:51.476 17:22:17 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:51.476 17:22:17 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:51.476 17:22:17 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:51.476 17:22:17 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:51.476 17:22:17 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:05:51.476 17:22:17 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:05:51.476 17:22:17 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:51.476 17:22:17 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:05:51.476 17:22:17 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:05:51.476 17:22:17 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:05:51.476 17:22:17 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:05:51.476 17:22:17 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:05:51.476 17:22:17 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:05:51.476 17:22:17 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:05:51.476 17:22:17 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:05:51.476 17:22:17 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:05:51.476 17:22:17 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:05:51.476 17:22:17 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:05:51.476 17:22:17 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:05:51.476 17:22:17 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:05:51.476 17:22:17 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:05:51.476 17:22:17 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:05:51.476 17:22:17 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:05:51.476 17:22:17 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:05:51.476 17:22:17 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:05:51.476 17:22:17 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:05:51.476 17:22:17 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:51.476 17:22:17 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:05:51.476 17:22:17 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:05:51.476 17:22:17 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:05:51.476 17:22:17 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 
00:05:51.476 17:22:17 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:05:51.476 17:22:17 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:05:51.476 17:22:17 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:05:51.476 17:22:17 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:05:51.476 17:22:17 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:05:51.476 17:22:17 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:05:51.476 17:22:17 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:05:51.476 17:22:17 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:51.476 17:22:17 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:05:51.476 17:22:17 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:05:51.476 17:22:17 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:05:51.476 17:22:17 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:05:51.476 17:22:17 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:05:51.477 17:22:17 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:05:51.477 17:22:17 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:51.477 17:22:17 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:05:51.477 17:22:17 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:51.477 17:22:17 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:05:51.477 17:22:17 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:51.477 17:22:17 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:51.477 17:22:17 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:51.477 17:22:17 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:51.477 17:22:17 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:51.477 17:22:17 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:51.477 17:22:17 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:05:51.477 17:22:17 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:51.477 #define SPDK_CONFIG_H 00:05:51.477 #define SPDK_CONFIG_APPS 1 00:05:51.477 #define SPDK_CONFIG_ARCH native 00:05:51.477 #undef SPDK_CONFIG_ASAN 00:05:51.477 #undef SPDK_CONFIG_AVAHI 00:05:51.477 #undef SPDK_CONFIG_CET 00:05:51.477 #define SPDK_CONFIG_COVERAGE 1 00:05:51.477 #define SPDK_CONFIG_CROSS_PREFIX 00:05:51.477 #undef SPDK_CONFIG_CRYPTO 00:05:51.477 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:51.477 #undef SPDK_CONFIG_CUSTOMOCF 00:05:51.477 #undef SPDK_CONFIG_DAOS 00:05:51.477 #define SPDK_CONFIG_DAOS_DIR 00:05:51.477 #define SPDK_CONFIG_DEBUG 1 00:05:51.477 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:51.477 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:05:51.477 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:51.477 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:51.477 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:51.477 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:05:51.477 #define SPDK_CONFIG_EXAMPLES 1 00:05:51.477 #undef SPDK_CONFIG_FC 
00:05:51.477 #define SPDK_CONFIG_FC_PATH 00:05:51.477 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:51.477 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:51.477 #undef SPDK_CONFIG_FUSE 00:05:51.477 #undef SPDK_CONFIG_FUZZER 00:05:51.477 #define SPDK_CONFIG_FUZZER_LIB 00:05:51.477 #undef SPDK_CONFIG_GOLANG 00:05:51.477 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:51.477 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:51.477 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:51.477 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:05:51.477 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:51.477 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:51.477 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:51.477 #define SPDK_CONFIG_IDXD 1 00:05:51.477 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:51.477 #undef SPDK_CONFIG_IPSEC_MB 00:05:51.477 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:51.477 #define SPDK_CONFIG_ISAL 1 00:05:51.477 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:51.477 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:51.477 #define SPDK_CONFIG_LIBDIR 00:05:51.477 #undef SPDK_CONFIG_LTO 00:05:51.477 #define SPDK_CONFIG_MAX_LCORES 00:05:51.477 #define SPDK_CONFIG_NVME_CUSE 1 00:05:51.477 #undef SPDK_CONFIG_OCF 00:05:51.477 #define SPDK_CONFIG_OCF_PATH 00:05:51.477 #define SPDK_CONFIG_OPENSSL_PATH 00:05:51.477 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:51.477 #define SPDK_CONFIG_PGO_DIR 00:05:51.477 #undef SPDK_CONFIG_PGO_USE 00:05:51.477 #define SPDK_CONFIG_PREFIX /usr/local 00:05:51.477 #undef SPDK_CONFIG_RAID5F 00:05:51.477 #undef SPDK_CONFIG_RBD 00:05:51.477 #define SPDK_CONFIG_RDMA 1 00:05:51.477 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:51.477 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:51.477 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:51.477 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:51.477 #define SPDK_CONFIG_SHARED 1 00:05:51.477 #undef SPDK_CONFIG_SMA 00:05:51.477 #define SPDK_CONFIG_TESTS 1 00:05:51.477 #undef SPDK_CONFIG_TSAN 00:05:51.477 #define SPDK_CONFIG_UBLK 1 00:05:51.477 #define SPDK_CONFIG_UBSAN 1 00:05:51.477 #undef SPDK_CONFIG_UNIT_TESTS 00:05:51.477 #undef SPDK_CONFIG_URING 00:05:51.477 #define SPDK_CONFIG_URING_PATH 00:05:51.477 #undef SPDK_CONFIG_URING_ZNS 00:05:51.477 #undef SPDK_CONFIG_USDT 00:05:51.477 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:51.477 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:51.477 #undef SPDK_CONFIG_VFIO_USER 00:05:51.477 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:51.477 #define SPDK_CONFIG_VHOST 1 00:05:51.477 #define SPDK_CONFIG_VIRTIO 1 00:05:51.477 #undef SPDK_CONFIG_VTUNE 00:05:51.477 #define SPDK_CONFIG_VTUNE_DIR 00:05:51.477 #define SPDK_CONFIG_WERROR 1 00:05:51.477 #define SPDK_CONFIG_WPDK_DIR 00:05:51.477 #undef SPDK_CONFIG_XNVME 00:05:51.477 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:51.477 17:22:17 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:51.477 17:22:17 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:51.477 17:22:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.477 17:22:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.477 17:22:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.477 17:22:17 -- paths/export.sh@2 -- # 
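Editor's note: the config.h dump above is applications.sh deciding whether this is a debug build by glob-matching the file contents. Equivalent standalone check (sketch; only $SPDK_DIR is assumed, the include/spdk/config.h path is the one shown in the trace):

  config_h=$SPDK_DIR/include/spdk/config.h
  if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build: SPDK_AUTOTEST_DEBUG_APPS may take effect"
  else
      echo "release build"
  fi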
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.477 17:22:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.477 17:22:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.477 17:22:17 -- paths/export.sh@5 -- # export PATH 00:05:51.477 17:22:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.477 17:22:17 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:05:51.477 17:22:17 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:05:51.477 17:22:17 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:05:51.477 17:22:17 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:05:51.477 17:22:17 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:51.477 17:22:17 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:51.477 17:22:17 -- pm/common@67 -- # TEST_TAG=N/A 00:05:51.477 17:22:17 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:05:51.477 17:22:17 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:05:51.477 17:22:17 -- pm/common@71 -- # uname -s 00:05:51.477 17:22:17 -- pm/common@71 -- # PM_OS=Linux 00:05:51.477 17:22:17 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:51.477 17:22:17 -- pm/common@74 -- # 
[[ Linux == FreeBSD ]] 00:05:51.477 17:22:17 -- pm/common@76 -- # [[ Linux == Linux ]] 00:05:51.477 17:22:17 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:05:51.477 17:22:17 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:05:51.477 17:22:17 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:51.477 17:22:17 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:51.477 17:22:17 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:05:51.477 17:22:17 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:05:51.477 17:22:17 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:05:51.477 17:22:17 -- common/autotest_common.sh@57 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:05:51.477 17:22:17 -- common/autotest_common.sh@61 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:51.477 17:22:17 -- common/autotest_common.sh@63 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:05:51.477 17:22:17 -- common/autotest_common.sh@65 -- # : 1 00:05:51.477 17:22:17 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:51.477 17:22:17 -- common/autotest_common.sh@67 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:05:51.477 17:22:17 -- common/autotest_common.sh@69 -- # : 00:05:51.477 17:22:17 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:05:51.477 17:22:17 -- common/autotest_common.sh@71 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:05:51.477 17:22:17 -- common/autotest_common.sh@73 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:05:51.477 17:22:17 -- common/autotest_common.sh@75 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:05:51.477 17:22:17 -- common/autotest_common.sh@77 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:51.477 17:22:17 -- common/autotest_common.sh@79 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:05:51.477 17:22:17 -- common/autotest_common.sh@81 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:05:51.477 17:22:17 -- common/autotest_common.sh@83 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:05:51.477 17:22:17 -- common/autotest_common.sh@85 -- # : 1 00:05:51.477 17:22:17 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:05:51.477 17:22:17 -- common/autotest_common.sh@87 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:05:51.477 17:22:17 -- common/autotest_common.sh@89 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:05:51.477 17:22:17 -- common/autotest_common.sh@91 -- # : 1 00:05:51.477 17:22:17 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:05:51.477 17:22:17 -- common/autotest_common.sh@93 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:05:51.477 17:22:17 -- common/autotest_common.sh@95 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:51.477 17:22:17 -- common/autotest_common.sh@97 -- # : 0 00:05:51.477 
17:22:17 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:05:51.477 17:22:17 -- common/autotest_common.sh@99 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:05:51.477 17:22:17 -- common/autotest_common.sh@101 -- # : rdma 00:05:51.477 17:22:17 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:51.477 17:22:17 -- common/autotest_common.sh@103 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:05:51.477 17:22:17 -- common/autotest_common.sh@105 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:05:51.477 17:22:17 -- common/autotest_common.sh@107 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:05:51.477 17:22:17 -- common/autotest_common.sh@109 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:05:51.477 17:22:17 -- common/autotest_common.sh@111 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:05:51.477 17:22:17 -- common/autotest_common.sh@113 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:05:51.477 17:22:17 -- common/autotest_common.sh@115 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:05:51.477 17:22:17 -- common/autotest_common.sh@117 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:51.477 17:22:17 -- common/autotest_common.sh@119 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:05:51.477 17:22:17 -- common/autotest_common.sh@121 -- # : 1 00:05:51.477 17:22:17 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:05:51.477 17:22:17 -- common/autotest_common.sh@123 -- # : 00:05:51.477 17:22:17 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:51.477 17:22:17 -- common/autotest_common.sh@125 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:05:51.477 17:22:17 -- common/autotest_common.sh@127 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:05:51.477 17:22:17 -- common/autotest_common.sh@129 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:05:51.477 17:22:17 -- common/autotest_common.sh@131 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:05:51.477 17:22:17 -- common/autotest_common.sh@133 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:05:51.477 17:22:17 -- common/autotest_common.sh@135 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:05:51.477 17:22:17 -- common/autotest_common.sh@137 -- # : 00:05:51.477 17:22:17 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:05:51.477 17:22:17 -- common/autotest_common.sh@139 -- # : true 00:05:51.477 17:22:17 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:05:51.477 17:22:17 -- common/autotest_common.sh@141 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:05:51.477 17:22:17 -- common/autotest_common.sh@143 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:05:51.477 17:22:17 -- common/autotest_common.sh@145 -- # : 0 
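Editor's note: the long runs of '# : 0' / '# export …' pairs above are autotest_common.sh applying its flag-defaulting idiom: keep a value that autorun-spdk.conf already exported, otherwise assign a default, then export either way. Sketch of the idiom (the defaults shown are illustrative, not authoritative):

  : "${SPDK_TEST_NVMF:=0}";              export SPDK_TEST_NVMF            # set to 1 by this job's conf
  : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"; export SPDK_TEST_NVMF_TRANSPORT
  : "${SPDK_TEST_NVMF_NICS:=}";          export SPDK_TEST_NVMF_NICS       # mlx5 here, from the conf
  # Under 'set -x' the first half of each pair prints as '# : 1', '# : rdma',
  # '# : mlx5' -- the exact pattern in the log above.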
00:05:51.477 17:22:17 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:05:51.477 17:22:17 -- common/autotest_common.sh@147 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:05:51.477 17:22:17 -- common/autotest_common.sh@149 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:05:51.477 17:22:17 -- common/autotest_common.sh@151 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:05:51.477 17:22:17 -- common/autotest_common.sh@153 -- # : mlx5 00:05:51.477 17:22:17 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:05:51.477 17:22:17 -- common/autotest_common.sh@155 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:05:51.477 17:22:17 -- common/autotest_common.sh@157 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:05:51.477 17:22:17 -- common/autotest_common.sh@159 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:05:51.477 17:22:17 -- common/autotest_common.sh@161 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:05:51.477 17:22:17 -- common/autotest_common.sh@163 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:05:51.477 17:22:17 -- common/autotest_common.sh@166 -- # : 00:05:51.477 17:22:17 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:05:51.477 17:22:17 -- common/autotest_common.sh@168 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:05:51.477 17:22:17 -- common/autotest_common.sh@170 -- # : 0 00:05:51.477 17:22:17 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:51.477 17:22:17 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:05:51.477 17:22:17 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:05:51.477 17:22:17 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:05:51.477 17:22:17 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:05:51.477 17:22:17 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:51.477 17:22:17 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:51.477 17:22:17 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:51.477 17:22:17 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:51.477 17:22:17 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:51.477 17:22:17 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:51.477 17:22:17 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:05:51.477 17:22:17 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:05:51.477 17:22:17 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:51.477 17:22:17 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:05:51.477 17:22:17 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:51.477 17:22:17 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:51.478 17:22:17 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:51.478 17:22:17 -- common/autotest_common.sh@193 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:51.478 17:22:17 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:51.478 17:22:17 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:05:51.478 17:22:17 -- common/autotest_common.sh@199 -- # cat 00:05:51.478 17:22:17 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:05:51.478 17:22:17 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:51.478 17:22:17 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:51.478 17:22:17 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:51.478 17:22:17 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:51.478 17:22:17 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:05:51.478 17:22:17 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:05:51.478 17:22:17 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:05:51.478 17:22:17 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:05:51.478 17:22:17 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:05:51.478 17:22:17 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:05:51.478 17:22:17 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:51.478 17:22:17 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:51.478 17:22:17 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:51.478 17:22:17 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:51.478 17:22:17 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:51.478 17:22:17 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:51.478 17:22:17 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:51.478 17:22:17 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:51.478 17:22:17 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:05:51.478 17:22:17 -- common/autotest_common.sh@252 -- # export valgrind= 00:05:51.478 17:22:17 -- common/autotest_common.sh@252 -- # valgrind= 00:05:51.478 17:22:17 -- common/autotest_common.sh@258 -- # uname -s 00:05:51.478 17:22:17 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:05:51.478 17:22:17 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:05:51.478 17:22:17 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:05:51.478 17:22:17 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:05:51.478 17:22:17 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:05:51.478 17:22:17 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:05:51.478 17:22:17 -- common/autotest_common.sh@268 -- # MAKE=make 00:05:51.478 17:22:17 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:05:51.478 17:22:17 -- 
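Editor's note: the sanitizer plumbing traced above resets a suppression file, whitelists a known libfuse leak, and points LSAN at it while pinning ASAN/UBSAN behaviour. Restated as a standalone sketch of the same steps:

  supp=/var/tmp/asan_suppression_file
  rm -rf "$supp"
  echo 'leak:libfuse3.so' > "$supp"                 # only suppression visible in the trace
  export LSAN_OPTIONS="suppressions=$supp"
  export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
  export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'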
common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:05:51.478 17:22:17 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:05:51.478 17:22:17 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:05:51.478 17:22:17 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:05:51.478 17:22:17 -- common/autotest_common.sh@289 -- # for i in "$@" 00:05:51.478 17:22:17 -- common/autotest_common.sh@290 -- # case "$i" in 00:05:51.478 17:22:17 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=rdma 00:05:51.478 17:22:17 -- common/autotest_common.sh@307 -- # [[ -z 1911672 ]] 00:05:51.478 17:22:17 -- common/autotest_common.sh@307 -- # kill -0 1911672 00:05:51.478 17:22:17 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:05:51.478 17:22:17 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:05:51.478 17:22:17 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:05:51.478 17:22:17 -- common/autotest_common.sh@320 -- # local mount target_dir 00:05:51.478 17:22:17 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:05:51.478 17:22:17 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:05:51.478 17:22:17 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:05:51.478 17:22:17 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:05:51.478 17:22:17 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.HSLz7H 00:05:51.478 17:22:17 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:51.478 17:22:17 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:05:51.478 17:22:17 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:05:51.478 17:22:17 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HSLz7H/tests/target /tmp/spdk.HSLz7H 00:05:51.478 17:22:17 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:05:51.478 17:22:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:51.478 17:22:17 -- common/autotest_common.sh@316 -- # df -T 00:05:51.478 17:22:17 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:05:51.478 17:22:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:05:51.478 17:22:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=996749312 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:05:51.478 17:22:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=4287680512 00:05:51.478 17:22:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # 
avails["$mount"]=50037055488 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994717184 00:05:51.478 17:22:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=11957661696 00:05:51.478 17:22:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=30943817728 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997356544 00:05:51.478 17:22:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=53538816 00:05:51.478 17:22:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=12390170624 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398944256 00:05:51.478 17:22:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=8773632 00:05:51.478 17:22:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996914176 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997360640 00:05:51.478 17:22:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=446464 00:05:51.478 17:22:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:05:51.478 17:22:17 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199463936 00:05:51.478 17:22:17 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199468032 00:05:51.478 17:22:17 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:05:51.478 17:22:17 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:05:51.478 17:22:17 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:05:51.478 * Looking for test storage... 
00:05:51.478 17:22:17 -- common/autotest_common.sh@357 -- # local target_space new_size 00:05:51.478 17:22:17 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:05:51.478 17:22:17 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:51.478 17:22:17 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:51.478 17:22:17 -- common/autotest_common.sh@361 -- # mount=/ 00:05:51.478 17:22:17 -- common/autotest_common.sh@363 -- # target_space=50037055488 00:05:51.478 17:22:17 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:05:51.478 17:22:17 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:05:51.478 17:22:17 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:05:51.478 17:22:17 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:05:51.478 17:22:17 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:05:51.478 17:22:17 -- common/autotest_common.sh@370 -- # new_size=14172254208 00:05:51.478 17:22:17 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:51.478 17:22:17 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:51.478 17:22:17 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:51.478 17:22:17 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:51.478 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:51.478 17:22:17 -- common/autotest_common.sh@378 -- # return 0 00:05:51.478 17:22:17 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:05:51.478 17:22:17 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:05:51.478 17:22:17 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:51.478 17:22:17 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:51.478 17:22:17 -- common/autotest_common.sh@1673 -- # true 00:05:51.478 17:22:17 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:05:51.478 17:22:17 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:51.478 17:22:17 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:51.478 17:22:17 -- common/autotest_common.sh@27 -- # exec 00:05:51.478 17:22:17 -- common/autotest_common.sh@29 -- # exec 00:05:51.478 17:22:17 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:51.478 17:22:17 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:51.478 17:22:17 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:51.478 17:22:17 -- common/autotest_common.sh@18 -- # set -x 00:05:51.478 17:22:17 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.478 17:22:17 -- nvmf/common.sh@7 -- # uname -s 00:05:51.478 17:22:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.478 17:22:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.478 17:22:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.478 17:22:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.478 17:22:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.478 17:22:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.478 17:22:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.478 17:22:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.478 17:22:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.478 17:22:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.478 17:22:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:51.478 17:22:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:51.478 17:22:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.478 17:22:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.478 17:22:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:51.478 17:22:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.478 17:22:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:51.478 17:22:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.478 17:22:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.478 17:22:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.478 17:22:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.478 17:22:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.478 17:22:17 -- paths/export.sh@4 -- # 
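Editor's note: filesystem.sh re-sources nvmf/common.sh here, which (among the port and prefix constants above) builds a stable NVMe host identity for later connect calls. The sketch below reproduces that setup standalone; it requires nvme-cli, and the UUID extraction shown is an assumed equivalent, not the script's exact expansion.

  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
  NVMF_IP_PREFIX=192.168.100 NVMF_IP_LEAST_ADDR=8
  NVME_HOSTNQN=$(nvme gen-hostnqn)           # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}       # keep just the UUID part (assumed extraction)
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # Later 'nvme connect' invocations can append "${NVME_HOST[@]}" so the target
  # sees the same host identity for the whole test run.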
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.478 17:22:17 -- paths/export.sh@5 -- # export PATH 00:05:51.478 17:22:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.478 17:22:17 -- nvmf/common.sh@47 -- # : 0 00:05:51.478 17:22:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:51.478 17:22:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:51.478 17:22:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.478 17:22:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.478 17:22:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.478 17:22:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:51.478 17:22:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:51.478 17:22:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:51.478 17:22:17 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:51.478 17:22:17 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:51.478 17:22:17 -- target/filesystem.sh@15 -- # nvmftestinit 00:05:51.478 17:22:17 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:05:51.478 17:22:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:51.478 17:22:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:51.478 17:22:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:51.478 17:22:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:51.478 17:22:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.478 17:22:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:51.478 17:22:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.478 17:22:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:05:51.478 17:22:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:51.478 17:22:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:51.478 17:22:17 -- common/autotest_common.sh@10 -- # set +x 00:05:53.381 17:22:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:53.381 17:22:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:53.381 17:22:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:53.381 17:22:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:53.381 17:22:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:53.381 17:22:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:53.381 17:22:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:53.381 17:22:19 -- 
nvmf/common.sh@295 -- # net_devs=() 00:05:53.381 17:22:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:53.381 17:22:19 -- nvmf/common.sh@296 -- # e810=() 00:05:53.381 17:22:19 -- nvmf/common.sh@296 -- # local -ga e810 00:05:53.381 17:22:19 -- nvmf/common.sh@297 -- # x722=() 00:05:53.381 17:22:19 -- nvmf/common.sh@297 -- # local -ga x722 00:05:53.381 17:22:19 -- nvmf/common.sh@298 -- # mlx=() 00:05:53.381 17:22:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:53.381 17:22:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:53.381 17:22:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:53.381 17:22:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:53.381 17:22:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:53.381 17:22:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:53.381 17:22:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:53.381 17:22:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:53.381 17:22:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:53.381 17:22:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:53.381 17:22:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:53.381 17:22:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:53.381 17:22:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:53.381 17:22:19 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:53.381 17:22:19 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:53.381 17:22:19 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:53.381 17:22:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:53.381 17:22:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:53.381 17:22:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:05:53.381 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:05:53.381 17:22:19 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:53.381 17:22:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:53.381 17:22:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:05:53.381 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:05:53.381 17:22:19 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:53.381 17:22:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:53.381 17:22:19 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:53.381 17:22:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.381 17:22:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:53.381 17:22:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.381 17:22:19 -- nvmf/common.sh@389 -- # echo 'Found 
net devices under 0000:09:00.0: mlx_0_0' 00:05:53.381 Found net devices under 0000:09:00.0: mlx_0_0 00:05:53.381 17:22:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.381 17:22:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:53.381 17:22:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.381 17:22:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:53.381 17:22:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.381 17:22:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:05:53.381 Found net devices under 0000:09:00.1: mlx_0_1 00:05:53.381 17:22:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.381 17:22:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:53.381 17:22:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:53.381 17:22:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@409 -- # rdma_device_init 00:05:53.381 17:22:19 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:05:53.381 17:22:19 -- nvmf/common.sh@58 -- # uname 00:05:53.381 17:22:19 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:53.381 17:22:19 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:53.381 17:22:19 -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:53.381 17:22:19 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:53.381 17:22:19 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:53.381 17:22:19 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:53.381 17:22:19 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:53.381 17:22:19 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:53.381 17:22:19 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:05:53.381 17:22:19 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:53.381 17:22:19 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:53.381 17:22:19 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:53.381 17:22:19 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:53.381 17:22:19 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:53.381 17:22:19 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:53.381 17:22:19 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:53.381 17:22:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:53.381 17:22:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:53.381 17:22:19 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:53.381 17:22:19 -- nvmf/common.sh@105 -- # continue 2 00:05:53.381 17:22:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:53.381 17:22:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:53.381 17:22:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:53.381 17:22:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:53.381 17:22:19 -- nvmf/common.sh@105 -- # continue 2 00:05:53.381 17:22:19 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:53.381 17:22:19 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:53.381 17:22:19 -- nvmf/common.sh@112 -- # interface=mlx_0_0 
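The block above is nvmf/common.sh narrowing the PCI device list to the Mellanox ConnectX functions (0x15b3:0x1017) and then resolving the netdev registered under each function through sysfs. A minimal sketch of that lookup, with the PCI list hard-coded to the two functions found in this run (the script itself derives it from pci_bus_cache):

    # Resolve the network interface(s) that sit under each selected PCI function.
    pci_devs=(0000:09:00.0 0000:09:00.1)
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done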
00:05:53.381 17:22:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:53.381 17:22:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:53.381 17:22:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:53.381 17:22:19 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:05:53.381 17:22:19 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:53.381 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:53.381 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:05:53.381 altname enp9s0f0np0 00:05:53.381 inet 192.168.100.8/24 scope global mlx_0_0 00:05:53.381 valid_lft forever preferred_lft forever 00:05:53.381 17:22:19 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:53.381 17:22:19 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:53.381 17:22:19 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:53.381 17:22:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:53.381 17:22:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:53.381 17:22:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:53.381 17:22:19 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:05:53.381 17:22:19 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:53.381 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:53.381 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:05:53.381 altname enp9s0f1np1 00:05:53.381 inet 192.168.100.9/24 scope global mlx_0_1 00:05:53.381 valid_lft forever preferred_lft forever 00:05:53.381 17:22:19 -- nvmf/common.sh@411 -- # return 0 00:05:53.381 17:22:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:05:53.381 17:22:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:53.381 17:22:19 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:05:53.381 17:22:19 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:05:53.381 17:22:19 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:53.381 17:22:19 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:53.381 17:22:19 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:53.381 17:22:19 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:53.382 17:22:19 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:53.382 17:22:19 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:53.382 17:22:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:53.382 17:22:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:53.382 17:22:19 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:53.382 17:22:19 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:53.382 17:22:19 -- nvmf/common.sh@105 -- # continue 2 00:05:53.382 17:22:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:53.382 17:22:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:53.382 17:22:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:53.382 17:22:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:53.382 17:22:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:53.382 17:22:19 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:53.382 17:22:19 -- nvmf/common.sh@105 -- # continue 2 00:05:53.382 17:22:19 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:53.382 17:22:19 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:53.382 17:22:19 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:05:53.382 17:22:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:53.382 17:22:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:53.382 17:22:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:53.382 17:22:19 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:53.382 17:22:19 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:53.382 17:22:19 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:53.382 17:22:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:53.382 17:22:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:53.382 17:22:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:53.382 17:22:19 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:05:53.382 192.168.100.9' 00:05:53.382 17:22:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:53.382 192.168.100.9' 00:05:53.382 17:22:19 -- nvmf/common.sh@446 -- # head -n 1 00:05:53.382 17:22:19 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:53.382 17:22:19 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:05:53.382 192.168.100.9' 00:05:53.382 17:22:19 -- nvmf/common.sh@447 -- # tail -n +2 00:05:53.382 17:22:19 -- nvmf/common.sh@447 -- # head -n 1 00:05:53.382 17:22:19 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:53.382 17:22:19 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:05:53.382 17:22:19 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:53.382 17:22:19 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:05:53.382 17:22:19 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:05:53.382 17:22:19 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:05:53.382 17:22:19 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:05:53.382 17:22:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:53.382 17:22:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.382 17:22:19 -- common/autotest_common.sh@10 -- # set +x 00:05:53.382 ************************************ 00:05:53.382 START TEST nvmf_filesystem_no_in_capsule 00:05:53.382 ************************************ 00:05:53.382 17:22:19 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:05:53.382 17:22:19 -- target/filesystem.sh@47 -- # in_capsule=0 00:05:53.382 17:22:19 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:53.382 17:22:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:05:53.382 17:22:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:53.382 17:22:19 -- common/autotest_common.sh@10 -- # set +x 00:05:53.382 17:22:19 -- nvmf/common.sh@470 -- # nvmfpid=1913376 00:05:53.382 17:22:19 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:53.382 17:22:19 -- nvmf/common.sh@471 -- # waitforlisten 1913376 00:05:53.382 17:22:19 -- common/autotest_common.sh@817 -- # '[' -z 1913376 ']' 00:05:53.382 17:22:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.382 17:22:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:53.382 17:22:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
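Here get_ip_address reads the IPv4 address assigned to each RDMA interface, and the caller folds the results into the target IPs used for the rest of the run. A condensed sketch of that logic as the trace suggests, with the interface names written out instead of coming from get_rdma_if_list:

    # get_ip_address <ifname>: first IPv4 address on the interface, without the /prefix
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for nic_name in mlx_0_0 mlx_0_1; do get_ip_address "$nic_name"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma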
00:05:53.382 17:22:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:53.382 17:22:19 -- common/autotest_common.sh@10 -- # set +x 00:05:53.382 [2024-04-18 17:22:19.776743] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:05:53.382 [2024-04-18 17:22:19.776831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:53.382 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.382 [2024-04-18 17:22:19.842010] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.642 [2024-04-18 17:22:19.962365] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:53.642 [2024-04-18 17:22:19.962440] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:53.642 [2024-04-18 17:22:19.962467] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.642 [2024-04-18 17:22:19.962480] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.642 [2024-04-18 17:22:19.962492] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:53.642 [2024-04-18 17:22:19.962572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.642 [2024-04-18 17:22:19.962600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.642 [2024-04-18 17:22:19.962671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.642 [2024-04-18 17:22:19.963073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.209 17:22:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:54.209 17:22:20 -- common/autotest_common.sh@850 -- # return 0 00:05:54.209 17:22:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:05:54.209 17:22:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:54.209 17:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:54.209 17:22:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:54.209 17:22:20 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:54.209 17:22:20 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:05:54.209 17:22:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.209 17:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:54.209 [2024-04-18 17:22:20.744437] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:54.468 [2024-04-18 17:22:20.767027] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfefb60/0xff4050) succeed. 00:05:54.468 [2024-04-18 17:22:20.777531] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xff1150/0x10356e0) succeed. 
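With nvmf_tgt (pid 1913376) listening on /var/tmp/spdk.sock, the no-in-capsule variant configures the target over RPC; the trace around this point shows each call. Collected here as a sketch, with the values used in this run:

    # Target-side setup for nvmf_filesystem_part 0
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    #   -c 0 requests no in-capsule data; the RDMA transport raises it to the
    #   256-byte minimum, as the rdma.c warning above notes.
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1        # 512 MiB bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420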
00:05:54.468 17:22:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.468 17:22:20 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:54.468 17:22:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.469 17:22:20 -- common/autotest_common.sh@10 -- # set +x 00:05:54.729 Malloc1 00:05:54.729 17:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.729 17:22:21 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:54.729 17:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.729 17:22:21 -- common/autotest_common.sh@10 -- # set +x 00:05:54.729 17:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.729 17:22:21 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:54.729 17:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.729 17:22:21 -- common/autotest_common.sh@10 -- # set +x 00:05:54.729 17:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.729 17:22:21 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:54.729 17:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.729 17:22:21 -- common/autotest_common.sh@10 -- # set +x 00:05:54.729 [2024-04-18 17:22:21.072459] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:54.729 17:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.729 17:22:21 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:54.729 17:22:21 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:05:54.729 17:22:21 -- common/autotest_common.sh@1365 -- # local bdev_info 00:05:54.729 17:22:21 -- common/autotest_common.sh@1366 -- # local bs 00:05:54.729 17:22:21 -- common/autotest_common.sh@1367 -- # local nb 00:05:54.729 17:22:21 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:54.729 17:22:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.729 17:22:21 -- common/autotest_common.sh@10 -- # set +x 00:05:54.729 17:22:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.729 17:22:21 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:05:54.729 { 00:05:54.729 "name": "Malloc1", 00:05:54.729 "aliases": [ 00:05:54.729 "2e988fd2-a85c-4c0e-89b4-22780d019230" 00:05:54.729 ], 00:05:54.729 "product_name": "Malloc disk", 00:05:54.729 "block_size": 512, 00:05:54.729 "num_blocks": 1048576, 00:05:54.729 "uuid": "2e988fd2-a85c-4c0e-89b4-22780d019230", 00:05:54.729 "assigned_rate_limits": { 00:05:54.729 "rw_ios_per_sec": 0, 00:05:54.729 "rw_mbytes_per_sec": 0, 00:05:54.729 "r_mbytes_per_sec": 0, 00:05:54.729 "w_mbytes_per_sec": 0 00:05:54.729 }, 00:05:54.729 "claimed": true, 00:05:54.729 "claim_type": "exclusive_write", 00:05:54.729 "zoned": false, 00:05:54.729 "supported_io_types": { 00:05:54.729 "read": true, 00:05:54.729 "write": true, 00:05:54.729 "unmap": true, 00:05:54.729 "write_zeroes": true, 00:05:54.729 "flush": true, 00:05:54.729 "reset": true, 00:05:54.729 "compare": false, 00:05:54.729 "compare_and_write": false, 00:05:54.729 "abort": true, 00:05:54.729 "nvme_admin": false, 00:05:54.729 "nvme_io": false 00:05:54.729 }, 00:05:54.729 "memory_domains": [ 00:05:54.729 { 00:05:54.729 "dma_device_id": "system", 00:05:54.729 "dma_device_type": 1 00:05:54.729 }, 00:05:54.729 { 00:05:54.729 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:54.729 "dma_device_type": 2 00:05:54.729 } 00:05:54.729 ], 00:05:54.729 "driver_specific": {} 00:05:54.729 } 00:05:54.729 ]' 00:05:54.729 17:22:21 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:05:54.729 17:22:21 -- common/autotest_common.sh@1369 -- # bs=512 00:05:54.729 17:22:21 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:05:54.729 17:22:21 -- common/autotest_common.sh@1370 -- # nb=1048576 00:05:54.729 17:22:21 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:05:54.729 17:22:21 -- common/autotest_common.sh@1374 -- # echo 512 00:05:54.729 17:22:21 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:54.729 17:22:21 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:05:58.928 17:22:25 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:58.928 17:22:25 -- common/autotest_common.sh@1184 -- # local i=0 00:05:58.928 17:22:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:05:58.928 17:22:25 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:05:58.928 17:22:25 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:00.835 17:22:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:00.835 17:22:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:00.835 17:22:27 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:00.835 17:22:27 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:00.835 17:22:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:00.835 17:22:27 -- common/autotest_common.sh@1194 -- # return 0 00:06:00.835 17:22:27 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:00.835 17:22:27 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:00.835 17:22:27 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:00.835 17:22:27 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:00.835 17:22:27 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:00.835 17:22:27 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:00.835 17:22:27 -- setup/common.sh@80 -- # echo 536870912 00:06:00.835 17:22:27 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:00.835 17:22:27 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:00.835 17:22:27 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:00.835 17:22:27 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:00.835 17:22:27 -- target/filesystem.sh@69 -- # partprobe 00:06:00.835 17:22:27 -- target/filesystem.sh@70 -- # sleep 1 00:06:02.214 17:22:28 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:02.214 17:22:28 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:02.214 17:22:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:02.214 17:22:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.214 17:22:28 -- common/autotest_common.sh@10 -- # set +x 00:06:02.214 ************************************ 00:06:02.214 START TEST filesystem_ext4 00:06:02.214 ************************************ 00:06:02.214 17:22:28 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:02.214 17:22:28 -- target/filesystem.sh@18 -- 
# fstype=ext4 00:06:02.214 17:22:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:02.214 17:22:28 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:02.214 17:22:28 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:02.214 17:22:28 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:02.214 17:22:28 -- common/autotest_common.sh@914 -- # local i=0 00:06:02.214 17:22:28 -- common/autotest_common.sh@915 -- # local force 00:06:02.214 17:22:28 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:02.214 17:22:28 -- common/autotest_common.sh@918 -- # force=-F 00:06:02.214 17:22:28 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:02.214 mke2fs 1.46.5 (30-Dec-2021) 00:06:02.214 Discarding device blocks: 0/522240 done 00:06:02.214 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:02.214 Filesystem UUID: 71c5d419-585f-4f00-aa5d-1cf12137668d 00:06:02.214 Superblock backups stored on blocks: 00:06:02.214 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:02.214 00:06:02.214 Allocating group tables: 0/64 done 00:06:02.214 Writing inode tables: 0/64 done 00:06:02.214 Creating journal (8192 blocks): done 00:06:02.214 Writing superblocks and filesystem accounting information: 0/64 done 00:06:02.214 00:06:02.214 17:22:28 -- common/autotest_common.sh@931 -- # return 0 00:06:02.214 17:22:28 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:02.214 17:22:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:02.214 17:22:28 -- target/filesystem.sh@25 -- # sync 00:06:02.214 17:22:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:02.214 17:22:28 -- target/filesystem.sh@27 -- # sync 00:06:02.214 17:22:28 -- target/filesystem.sh@29 -- # i=0 00:06:02.214 17:22:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:02.214 17:22:28 -- target/filesystem.sh@37 -- # kill -0 1913376 00:06:02.214 17:22:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:02.214 17:22:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:02.214 17:22:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:02.214 17:22:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:02.214 00:06:02.214 real 0m0.153s 00:06:02.214 user 0m0.010s 00:06:02.214 sys 0m0.029s 00:06:02.214 17:22:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.214 17:22:28 -- common/autotest_common.sh@10 -- # set +x 00:06:02.214 ************************************ 00:06:02.214 END TEST filesystem_ext4 00:06:02.214 ************************************ 00:06:02.214 17:22:28 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:02.214 17:22:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:02.214 17:22:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.214 17:22:28 -- common/autotest_common.sh@10 -- # set +x 00:06:02.214 ************************************ 00:06:02.214 START TEST filesystem_btrfs 00:06:02.214 ************************************ 00:06:02.214 17:22:28 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:02.214 17:22:28 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:02.214 17:22:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:02.214 17:22:28 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:02.214 17:22:28 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:02.214 17:22:28 -- common/autotest_common.sh@913 
-- # local dev_name=/dev/nvme0n1p1 00:06:02.214 17:22:28 -- common/autotest_common.sh@914 -- # local i=0 00:06:02.214 17:22:28 -- common/autotest_common.sh@915 -- # local force 00:06:02.214 17:22:28 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:02.214 17:22:28 -- common/autotest_common.sh@920 -- # force=-f 00:06:02.214 17:22:28 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:02.474 btrfs-progs v6.6.2 00:06:02.474 See https://btrfs.readthedocs.io for more information. 00:06:02.474 00:06:02.474 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:02.474 NOTE: several default settings have changed in version 5.15, please make sure 00:06:02.474 this does not affect your deployments: 00:06:02.474 - DUP for metadata (-m dup) 00:06:02.474 - enabled no-holes (-O no-holes) 00:06:02.474 - enabled free-space-tree (-R free-space-tree) 00:06:02.474 00:06:02.474 Label: (null) 00:06:02.474 UUID: 3a9a2197-6e27-448e-8397-24d0cddb6c47 00:06:02.474 Node size: 16384 00:06:02.474 Sector size: 4096 00:06:02.474 Filesystem size: 510.00MiB 00:06:02.474 Block group profiles: 00:06:02.474 Data: single 8.00MiB 00:06:02.474 Metadata: DUP 32.00MiB 00:06:02.474 System: DUP 8.00MiB 00:06:02.474 SSD detected: yes 00:06:02.474 Zoned device: no 00:06:02.474 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:02.474 Runtime features: free-space-tree 00:06:02.474 Checksum: crc32c 00:06:02.474 Number of devices: 1 00:06:02.474 Devices: 00:06:02.474 ID SIZE PATH 00:06:02.474 1 510.00MiB /dev/nvme0n1p1 00:06:02.474 00:06:02.474 17:22:28 -- common/autotest_common.sh@931 -- # return 0 00:06:02.474 17:22:28 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:02.474 17:22:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:02.474 17:22:28 -- target/filesystem.sh@25 -- # sync 00:06:02.474 17:22:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:02.474 17:22:28 -- target/filesystem.sh@27 -- # sync 00:06:02.474 17:22:28 -- target/filesystem.sh@29 -- # i=0 00:06:02.475 17:22:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:02.475 17:22:28 -- target/filesystem.sh@37 -- # kill -0 1913376 00:06:02.475 17:22:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:02.475 17:22:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:02.475 17:22:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:02.475 17:22:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:02.475 00:06:02.475 real 0m0.161s 00:06:02.475 user 0m0.017s 00:06:02.475 sys 0m0.033s 00:06:02.475 17:22:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.475 17:22:28 -- common/autotest_common.sh@10 -- # set +x 00:06:02.475 ************************************ 00:06:02.475 END TEST filesystem_btrfs 00:06:02.475 ************************************ 00:06:02.475 17:22:28 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:02.475 17:22:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:02.475 17:22:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.475 17:22:28 -- common/autotest_common.sh@10 -- # set +x 00:06:02.475 ************************************ 00:06:02.475 START TEST filesystem_xfs 00:06:02.475 ************************************ 00:06:02.475 17:22:28 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:02.475 17:22:28 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:02.475 17:22:28 -- target/filesystem.sh@19 -- 
# nvme_name=nvme0n1 00:06:02.475 17:22:28 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:02.475 17:22:28 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:02.475 17:22:28 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:02.475 17:22:28 -- common/autotest_common.sh@914 -- # local i=0 00:06:02.475 17:22:28 -- common/autotest_common.sh@915 -- # local force 00:06:02.475 17:22:28 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:02.475 17:22:28 -- common/autotest_common.sh@920 -- # force=-f 00:06:02.475 17:22:28 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:02.736 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:02.736 = sectsz=512 attr=2, projid32bit=1 00:06:02.736 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:02.736 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:02.736 data = bsize=4096 blocks=130560, imaxpct=25 00:06:02.736 = sunit=0 swidth=0 blks 00:06:02.736 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:02.736 log =internal log bsize=4096 blocks=16384, version=2 00:06:02.736 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:02.736 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:02.736 Discarding blocks...Done. 00:06:02.736 17:22:29 -- common/autotest_common.sh@931 -- # return 0 00:06:02.736 17:22:29 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:02.736 17:22:29 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:02.736 17:22:29 -- target/filesystem.sh@25 -- # sync 00:06:02.736 17:22:29 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:02.736 17:22:29 -- target/filesystem.sh@27 -- # sync 00:06:02.736 17:22:29 -- target/filesystem.sh@29 -- # i=0 00:06:02.736 17:22:29 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:02.736 17:22:29 -- target/filesystem.sh@37 -- # kill -0 1913376 00:06:02.736 17:22:29 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:02.736 17:22:29 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:02.736 17:22:29 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:02.736 17:22:29 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:02.736 00:06:02.736 real 0m0.180s 00:06:02.736 user 0m0.014s 00:06:02.736 sys 0m0.027s 00:06:02.736 17:22:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.736 17:22:29 -- common/autotest_common.sh@10 -- # set +x 00:06:02.736 ************************************ 00:06:02.736 END TEST filesystem_xfs 00:06:02.736 ************************************ 00:06:02.736 17:22:29 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:02.736 17:22:29 -- target/filesystem.sh@93 -- # sync 00:06:02.736 17:22:29 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:05.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:05.272 17:22:31 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:05.272 17:22:31 -- common/autotest_common.sh@1205 -- # local i=0 00:06:05.272 17:22:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:05.272 17:22:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:05.272 17:22:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:05.272 17:22:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:05.272 17:22:31 -- common/autotest_common.sh@1217 -- # return 0 00:06:05.272 17:22:31 -- target/filesystem.sh@97 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:05.272 17:22:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:05.272 17:22:31 -- common/autotest_common.sh@10 -- # set +x 00:06:05.272 17:22:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:05.272 17:22:31 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:05.272 17:22:31 -- target/filesystem.sh@101 -- # killprocess 1913376 00:06:05.272 17:22:31 -- common/autotest_common.sh@936 -- # '[' -z 1913376 ']' 00:06:05.272 17:22:31 -- common/autotest_common.sh@940 -- # kill -0 1913376 00:06:05.272 17:22:31 -- common/autotest_common.sh@941 -- # uname 00:06:05.272 17:22:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.272 17:22:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1913376 00:06:05.272 17:22:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:05.272 17:22:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:05.272 17:22:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1913376' 00:06:05.272 killing process with pid 1913376 00:06:05.272 17:22:31 -- common/autotest_common.sh@955 -- # kill 1913376 00:06:05.272 17:22:31 -- common/autotest_common.sh@960 -- # wait 1913376 00:06:05.838 17:22:32 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:05.838 00:06:05.838 real 0m12.342s 00:06:05.838 user 0m48.210s 00:06:05.838 sys 0m1.065s 00:06:05.838 17:22:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.838 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:06:05.838 ************************************ 00:06:05.838 END TEST nvmf_filesystem_no_in_capsule 00:06:05.838 ************************************ 00:06:05.838 17:22:32 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:05.838 17:22:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:05.838 17:22:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.838 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:06:05.838 ************************************ 00:06:05.838 START TEST nvmf_filesystem_in_capsule 00:06:05.838 ************************************ 00:06:05.838 17:22:32 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:06:05.838 17:22:32 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:05.838 17:22:32 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:05.838 17:22:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:05.838 17:22:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:05.838 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:06:05.838 17:22:32 -- nvmf/common.sh@470 -- # nvmfpid=1915044 00:06:05.838 17:22:32 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:05.838 17:22:32 -- nvmf/common.sh@471 -- # waitforlisten 1915044 00:06:05.838 17:22:32 -- common/autotest_common.sh@817 -- # '[' -z 1915044 ']' 00:06:05.838 17:22:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.838 17:22:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:05.838 17:22:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
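Each filesystem_ext4/btrfs/xfs subtest in the run above, and in the in-capsule run starting here, exercises the same body once mkfs has finished. A compact rendering of the steps traced at target/filesystem.sh@23-43, with the device names and pid from the first run:

    mount /dev/nvme0n1p1 /mnt/device          # mount the namespace-backed partition
    touch /mnt/device/aaa                     # prove the filesystem is writable
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # nvmf_tgt (1913376 in the first run) must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # device and partition still visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1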
00:06:05.838 17:22:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:05.838 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:06:05.838 [2024-04-18 17:22:32.242393] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:06:05.838 [2024-04-18 17:22:32.242470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:05.838 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.838 [2024-04-18 17:22:32.300620] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.098 [2024-04-18 17:22:32.407041] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:06.098 [2024-04-18 17:22:32.407097] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:06.098 [2024-04-18 17:22:32.407110] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:06.098 [2024-04-18 17:22:32.407120] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:06.098 [2024-04-18 17:22:32.407130] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:06.098 [2024-04-18 17:22:32.407482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.098 [2024-04-18 17:22:32.407502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.098 [2024-04-18 17:22:32.407560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.098 [2024-04-18 17:22:32.407563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.098 17:22:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:06.098 17:22:32 -- common/autotest_common.sh@850 -- # return 0 00:06:06.098 17:22:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:06.098 17:22:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:06.098 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:06:06.098 17:22:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:06.098 17:22:32 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:06.098 17:22:32 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:06:06.098 17:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:06.098 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:06:06.098 [2024-04-18 17:22:32.586833] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2122b60/0x2127050) succeed. 00:06:06.098 [2024-04-18 17:22:32.597467] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2124150/0x21686e0) succeed. 
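The in-capsule run repeats the same bring-up with one difference: nvmf_filesystem_part is invoked with 4096, so the transport above is created with a 4 KiB in-capsule data size instead of 0 (and, unlike the first run, the trace shows no minimum-size warning). The host-side attach that follows is unchanged; roughly, using the HOSTNQN/HOSTID generated earlier in this log:

    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    nvme connect -i 15 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
        -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    waitforserial SPDKISFASTANDAWESOME        # poll lsblk until the SPDK namespace appears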
00:06:06.358 17:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:06.358 17:22:32 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:06.358 17:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:06.358 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:06:06.617 Malloc1 00:06:06.617 17:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:06.617 17:22:32 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:06.617 17:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:06.617 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:06:06.617 17:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:06.617 17:22:32 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:06.617 17:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:06.617 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:06:06.617 17:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:06.617 17:22:32 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:06.617 17:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:06.617 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:06:06.617 [2024-04-18 17:22:32.935481] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:06.617 17:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:06.617 17:22:32 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:06.617 17:22:32 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:06.617 17:22:32 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:06.617 17:22:32 -- common/autotest_common.sh@1366 -- # local bs 00:06:06.617 17:22:32 -- common/autotest_common.sh@1367 -- # local nb 00:06:06.617 17:22:32 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:06.617 17:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:06.617 17:22:32 -- common/autotest_common.sh@10 -- # set +x 00:06:06.617 17:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:06.617 17:22:32 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:06.617 { 00:06:06.617 "name": "Malloc1", 00:06:06.617 "aliases": [ 00:06:06.617 "d8e54f88-aca4-44e6-abf3-f88735eb74ca" 00:06:06.617 ], 00:06:06.617 "product_name": "Malloc disk", 00:06:06.617 "block_size": 512, 00:06:06.617 "num_blocks": 1048576, 00:06:06.617 "uuid": "d8e54f88-aca4-44e6-abf3-f88735eb74ca", 00:06:06.617 "assigned_rate_limits": { 00:06:06.617 "rw_ios_per_sec": 0, 00:06:06.617 "rw_mbytes_per_sec": 0, 00:06:06.617 "r_mbytes_per_sec": 0, 00:06:06.617 "w_mbytes_per_sec": 0 00:06:06.617 }, 00:06:06.617 "claimed": true, 00:06:06.617 "claim_type": "exclusive_write", 00:06:06.617 "zoned": false, 00:06:06.617 "supported_io_types": { 00:06:06.617 "read": true, 00:06:06.617 "write": true, 00:06:06.617 "unmap": true, 00:06:06.617 "write_zeroes": true, 00:06:06.617 "flush": true, 00:06:06.617 "reset": true, 00:06:06.617 "compare": false, 00:06:06.617 "compare_and_write": false, 00:06:06.617 "abort": true, 00:06:06.617 "nvme_admin": false, 00:06:06.617 "nvme_io": false 00:06:06.617 }, 00:06:06.617 "memory_domains": [ 00:06:06.617 { 00:06:06.617 "dma_device_id": "system", 00:06:06.617 "dma_device_type": 1 00:06:06.617 }, 00:06:06.617 { 00:06:06.617 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:06:06.617 "dma_device_type": 2 00:06:06.617 } 00:06:06.617 ], 00:06:06.617 "driver_specific": {} 00:06:06.617 } 00:06:06.617 ]' 00:06:06.617 17:22:32 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:06.617 17:22:32 -- common/autotest_common.sh@1369 -- # bs=512 00:06:06.617 17:22:32 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:06.617 17:22:33 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:06.617 17:22:33 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:06.617 17:22:33 -- common/autotest_common.sh@1374 -- # echo 512 00:06:06.617 17:22:33 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:06.617 17:22:33 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:06:10.843 17:22:36 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:10.843 17:22:36 -- common/autotest_common.sh@1184 -- # local i=0 00:06:10.843 17:22:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:10.843 17:22:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:10.843 17:22:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:12.221 17:22:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:12.221 17:22:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:12.221 17:22:38 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:12.221 17:22:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:12.221 17:22:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:12.221 17:22:38 -- common/autotest_common.sh@1194 -- # return 0 00:06:12.479 17:22:38 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:12.479 17:22:38 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:12.479 17:22:38 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:12.479 17:22:38 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:12.479 17:22:38 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:12.479 17:22:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:12.479 17:22:38 -- setup/common.sh@80 -- # echo 536870912 00:06:12.479 17:22:38 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:12.479 17:22:38 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:12.479 17:22:38 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:12.479 17:22:38 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:12.479 17:22:38 -- target/filesystem.sh@69 -- # partprobe 00:06:12.739 17:22:39 -- target/filesystem.sh@70 -- # sleep 1 00:06:13.676 17:22:40 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:13.676 17:22:40 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:13.676 17:22:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:13.676 17:22:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.676 17:22:40 -- common/autotest_common.sh@10 -- # set +x 00:06:13.676 ************************************ 00:06:13.676 START TEST filesystem_in_capsule_ext4 00:06:13.676 ************************************ 00:06:13.676 17:22:40 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:13.676 17:22:40 -- 
target/filesystem.sh@18 -- # fstype=ext4 00:06:13.676 17:22:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:13.676 17:22:40 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:13.676 17:22:40 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:13.676 17:22:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:13.676 17:22:40 -- common/autotest_common.sh@914 -- # local i=0 00:06:13.676 17:22:40 -- common/autotest_common.sh@915 -- # local force 00:06:13.676 17:22:40 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:13.676 17:22:40 -- common/autotest_common.sh@918 -- # force=-F 00:06:13.676 17:22:40 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:13.676 mke2fs 1.46.5 (30-Dec-2021) 00:06:13.936 Discarding device blocks: 0/522240 done 00:06:13.936 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:13.936 Filesystem UUID: 94a2f927-b98b-41de-be6a-2ba1e379fe46 00:06:13.936 Superblock backups stored on blocks: 00:06:13.936 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:13.936 00:06:13.936 Allocating group tables: 0/64 done 00:06:13.936 Writing inode tables: 0/64 done 00:06:13.936 Creating journal (8192 blocks): done 00:06:13.936 Writing superblocks and filesystem accounting information: 0/64 done 00:06:13.936 00:06:13.936 17:22:40 -- common/autotest_common.sh@931 -- # return 0 00:06:13.936 17:22:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:13.936 17:22:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:13.936 17:22:40 -- target/filesystem.sh@25 -- # sync 00:06:13.936 17:22:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:13.936 17:22:40 -- target/filesystem.sh@27 -- # sync 00:06:13.936 17:22:40 -- target/filesystem.sh@29 -- # i=0 00:06:13.936 17:22:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:13.936 17:22:40 -- target/filesystem.sh@37 -- # kill -0 1915044 00:06:13.936 17:22:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:13.936 17:22:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:13.936 17:22:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:13.936 17:22:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:13.936 00:06:13.936 real 0m0.147s 00:06:13.936 user 0m0.013s 00:06:13.936 sys 0m0.029s 00:06:13.936 17:22:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.936 17:22:40 -- common/autotest_common.sh@10 -- # set +x 00:06:13.936 ************************************ 00:06:13.936 END TEST filesystem_in_capsule_ext4 00:06:13.936 ************************************ 00:06:13.936 17:22:40 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:13.936 17:22:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:13.936 17:22:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.936 17:22:40 -- common/autotest_common.sh@10 -- # set +x 00:06:13.936 ************************************ 00:06:13.936 START TEST filesystem_in_capsule_btrfs 00:06:13.936 ************************************ 00:06:13.936 17:22:40 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:13.936 17:22:40 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:13.936 17:22:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:13.936 17:22:40 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:13.936 17:22:40 -- common/autotest_common.sh@912 -- # local 
fstype=btrfs 00:06:13.936 17:22:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:13.936 17:22:40 -- common/autotest_common.sh@914 -- # local i=0 00:06:13.936 17:22:40 -- common/autotest_common.sh@915 -- # local force 00:06:13.936 17:22:40 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:13.936 17:22:40 -- common/autotest_common.sh@920 -- # force=-f 00:06:13.936 17:22:40 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:14.195 btrfs-progs v6.6.2 00:06:14.195 See https://btrfs.readthedocs.io for more information. 00:06:14.195 00:06:14.195 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:14.195 NOTE: several default settings have changed in version 5.15, please make sure 00:06:14.195 this does not affect your deployments: 00:06:14.195 - DUP for metadata (-m dup) 00:06:14.195 - enabled no-holes (-O no-holes) 00:06:14.195 - enabled free-space-tree (-R free-space-tree) 00:06:14.195 00:06:14.195 Label: (null) 00:06:14.196 UUID: 94c4a44a-644e-406f-9e55-9cbc999212c0 00:06:14.196 Node size: 16384 00:06:14.196 Sector size: 4096 00:06:14.196 Filesystem size: 510.00MiB 00:06:14.196 Block group profiles: 00:06:14.196 Data: single 8.00MiB 00:06:14.196 Metadata: DUP 32.00MiB 00:06:14.196 System: DUP 8.00MiB 00:06:14.196 SSD detected: yes 00:06:14.196 Zoned device: no 00:06:14.196 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:14.196 Runtime features: free-space-tree 00:06:14.196 Checksum: crc32c 00:06:14.196 Number of devices: 1 00:06:14.196 Devices: 00:06:14.196 ID SIZE PATH 00:06:14.196 1 510.00MiB /dev/nvme0n1p1 00:06:14.196 00:06:14.196 17:22:40 -- common/autotest_common.sh@931 -- # return 0 00:06:14.196 17:22:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:14.196 17:22:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:14.196 17:22:40 -- target/filesystem.sh@25 -- # sync 00:06:14.196 17:22:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:14.196 17:22:40 -- target/filesystem.sh@27 -- # sync 00:06:14.196 17:22:40 -- target/filesystem.sh@29 -- # i=0 00:06:14.196 17:22:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:14.196 17:22:40 -- target/filesystem.sh@37 -- # kill -0 1915044 00:06:14.196 17:22:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:14.196 17:22:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:14.196 17:22:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:14.196 17:22:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:14.196 00:06:14.196 real 0m0.165s 00:06:14.196 user 0m0.016s 00:06:14.196 sys 0m0.036s 00:06:14.196 17:22:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.196 17:22:40 -- common/autotest_common.sh@10 -- # set +x 00:06:14.196 ************************************ 00:06:14.196 END TEST filesystem_in_capsule_btrfs 00:06:14.196 ************************************ 00:06:14.196 17:22:40 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:14.196 17:22:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:14.196 17:22:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.196 17:22:40 -- common/autotest_common.sh@10 -- # set +x 00:06:14.196 ************************************ 00:06:14.196 START TEST filesystem_in_capsule_xfs 00:06:14.196 ************************************ 00:06:14.196 17:22:40 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:14.196 
17:22:40 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:14.196 17:22:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:14.196 17:22:40 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:14.196 17:22:40 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:14.196 17:22:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:14.196 17:22:40 -- common/autotest_common.sh@914 -- # local i=0 00:06:14.196 17:22:40 -- common/autotest_common.sh@915 -- # local force 00:06:14.196 17:22:40 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:14.196 17:22:40 -- common/autotest_common.sh@920 -- # force=-f 00:06:14.196 17:22:40 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:14.455 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:14.455 = sectsz=512 attr=2, projid32bit=1 00:06:14.455 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:14.455 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:14.455 data = bsize=4096 blocks=130560, imaxpct=25 00:06:14.455 = sunit=0 swidth=0 blks 00:06:14.455 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:14.455 log =internal log bsize=4096 blocks=16384, version=2 00:06:14.455 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:14.455 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:14.455 Discarding blocks...Done. 00:06:14.455 17:22:40 -- common/autotest_common.sh@931 -- # return 0 00:06:14.455 17:22:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:14.455 17:22:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:14.455 17:22:40 -- target/filesystem.sh@25 -- # sync 00:06:14.455 17:22:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:14.455 17:22:40 -- target/filesystem.sh@27 -- # sync 00:06:14.455 17:22:40 -- target/filesystem.sh@29 -- # i=0 00:06:14.455 17:22:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:14.455 17:22:40 -- target/filesystem.sh@37 -- # kill -0 1915044 00:06:14.455 17:22:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:14.455 17:22:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:14.455 17:22:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:14.455 17:22:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:14.455 00:06:14.455 real 0m0.163s 00:06:14.455 user 0m0.013s 00:06:14.455 sys 0m0.029s 00:06:14.455 17:22:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.455 17:22:40 -- common/autotest_common.sh@10 -- # set +x 00:06:14.455 ************************************ 00:06:14.455 END TEST filesystem_in_capsule_xfs 00:06:14.455 ************************************ 00:06:14.455 17:22:40 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:14.455 17:22:40 -- target/filesystem.sh@93 -- # sync 00:06:14.455 17:22:40 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:16.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:16.995 17:22:43 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:16.995 17:22:43 -- common/autotest_common.sh@1205 -- # local i=0 00:06:16.995 17:22:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:16.995 17:22:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:16.995 17:22:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:16.995 17:22:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:16.995 17:22:43 -- 
common/autotest_common.sh@1217 -- # return 0 00:06:16.995 17:22:43 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:16.995 17:22:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:16.995 17:22:43 -- common/autotest_common.sh@10 -- # set +x 00:06:16.995 17:22:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:16.995 17:22:43 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:16.995 17:22:43 -- target/filesystem.sh@101 -- # killprocess 1915044 00:06:16.995 17:22:43 -- common/autotest_common.sh@936 -- # '[' -z 1915044 ']' 00:06:16.995 17:22:43 -- common/autotest_common.sh@940 -- # kill -0 1915044 00:06:16.995 17:22:43 -- common/autotest_common.sh@941 -- # uname 00:06:16.995 17:22:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.995 17:22:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1915044 00:06:16.995 17:22:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.995 17:22:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.995 17:22:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1915044' 00:06:16.995 killing process with pid 1915044 00:06:16.995 17:22:43 -- common/autotest_common.sh@955 -- # kill 1915044 00:06:16.995 17:22:43 -- common/autotest_common.sh@960 -- # wait 1915044 00:06:17.564 17:22:43 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:17.564 00:06:17.564 real 0m11.601s 00:06:17.564 user 0m45.055s 00:06:17.564 sys 0m1.016s 00:06:17.564 17:22:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.564 17:22:43 -- common/autotest_common.sh@10 -- # set +x 00:06:17.564 ************************************ 00:06:17.564 END TEST nvmf_filesystem_in_capsule 00:06:17.564 ************************************ 00:06:17.564 17:22:43 -- target/filesystem.sh@108 -- # nvmftestfini 00:06:17.564 17:22:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:17.564 17:22:43 -- nvmf/common.sh@117 -- # sync 00:06:17.564 17:22:43 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:06:17.564 17:22:43 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:06:17.564 17:22:43 -- nvmf/common.sh@120 -- # set +e 00:06:17.564 17:22:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:17.564 17:22:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:06:17.564 rmmod nvme_rdma 00:06:17.564 rmmod nvme_fabrics 00:06:17.564 17:22:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:17.564 17:22:43 -- nvmf/common.sh@124 -- # set -e 00:06:17.564 17:22:43 -- nvmf/common.sh@125 -- # return 0 00:06:17.564 17:22:43 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:06:17.564 17:22:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:17.564 17:22:43 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:06:17.564 00:06:17.564 real 0m26.488s 00:06:17.564 user 1m34.259s 00:06:17.564 sys 0m3.705s 00:06:17.564 17:22:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.564 17:22:43 -- common/autotest_common.sh@10 -- # set +x 00:06:17.564 ************************************ 00:06:17.564 END TEST nvmf_filesystem 00:06:17.564 ************************************ 00:06:17.564 17:22:43 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:06:17.564 17:22:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:17.564 17:22:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.564 17:22:43 -- common/autotest_common.sh@10 -- # 
set +x 00:06:17.564 ************************************ 00:06:17.564 START TEST nvmf_discovery 00:06:17.564 ************************************ 00:06:17.564 17:22:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:06:17.564 * Looking for test storage... 00:06:17.564 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:17.565 17:22:44 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.565 17:22:44 -- nvmf/common.sh@7 -- # uname -s 00:06:17.565 17:22:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.565 17:22:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.565 17:22:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.565 17:22:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.565 17:22:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.565 17:22:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.565 17:22:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.565 17:22:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.565 17:22:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.565 17:22:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.565 17:22:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:17.565 17:22:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:17.565 17:22:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.565 17:22:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.565 17:22:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:17.565 17:22:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.565 17:22:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:17.565 17:22:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.565 17:22:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.565 17:22:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.565 17:22:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.565 17:22:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.565 17:22:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.565 17:22:44 -- paths/export.sh@5 -- # export PATH 00:06:17.565 17:22:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.565 17:22:44 -- nvmf/common.sh@47 -- # : 0 00:06:17.565 17:22:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:17.565 17:22:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:17.565 17:22:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.565 17:22:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.565 17:22:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.565 17:22:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:17.565 17:22:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:17.565 17:22:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:17.565 17:22:44 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:17.565 17:22:44 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:17.565 17:22:44 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:17.565 17:22:44 -- target/discovery.sh@15 -- # hash nvme 00:06:17.565 17:22:44 -- target/discovery.sh@20 -- # nvmftestinit 00:06:17.565 17:22:44 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:06:17.565 17:22:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:17.565 17:22:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:17.565 17:22:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:17.565 17:22:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:17.565 17:22:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.565 17:22:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:17.565 17:22:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.565 17:22:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:17.565 17:22:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:17.565 17:22:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:17.565 17:22:44 -- common/autotest_common.sh@10 -- # set +x 00:06:19.468 17:22:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:19.468 17:22:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:19.468 17:22:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:19.468 17:22:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:19.468 17:22:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:19.468 17:22:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:19.468 17:22:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:19.468 17:22:45 -- 
nvmf/common.sh@295 -- # net_devs=() 00:06:19.468 17:22:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:19.468 17:22:45 -- nvmf/common.sh@296 -- # e810=() 00:06:19.468 17:22:45 -- nvmf/common.sh@296 -- # local -ga e810 00:06:19.468 17:22:45 -- nvmf/common.sh@297 -- # x722=() 00:06:19.468 17:22:45 -- nvmf/common.sh@297 -- # local -ga x722 00:06:19.468 17:22:45 -- nvmf/common.sh@298 -- # mlx=() 00:06:19.468 17:22:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:19.468 17:22:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:19.468 17:22:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:19.468 17:22:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:19.468 17:22:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:19.468 17:22:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:19.468 17:22:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:19.468 17:22:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:19.468 17:22:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:19.468 17:22:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:19.468 17:22:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:19.468 17:22:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:19.468 17:22:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:19.468 17:22:45 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:19.468 17:22:45 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:19.468 17:22:45 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:19.468 17:22:45 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:19.468 17:22:45 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:19.468 17:22:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:19.468 17:22:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:19.468 17:22:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:06:19.468 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:06:19.468 17:22:45 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:19.468 17:22:45 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:19.468 17:22:45 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:19.468 17:22:45 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:19.468 17:22:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:19.468 17:22:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:06:19.468 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:06:19.468 17:22:45 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:19.468 17:22:45 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:19.468 17:22:45 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:19.468 17:22:45 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:19.468 17:22:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:19.468 17:22:45 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:19.468 17:22:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:19.468 17:22:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.469 17:22:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:19.469 17:22:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.469 17:22:45 -- nvmf/common.sh@389 -- # echo 'Found 
net devices under 0000:09:00.0: mlx_0_0' 00:06:19.469 Found net devices under 0000:09:00.0: mlx_0_0 00:06:19.469 17:22:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.469 17:22:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:19.469 17:22:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.469 17:22:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:19.469 17:22:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.469 17:22:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:06:19.469 Found net devices under 0000:09:00.1: mlx_0_1 00:06:19.469 17:22:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.469 17:22:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:19.469 17:22:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:19.469 17:22:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:19.469 17:22:45 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:06:19.469 17:22:45 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:06:19.469 17:22:45 -- nvmf/common.sh@409 -- # rdma_device_init 00:06:19.469 17:22:45 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:06:19.469 17:22:45 -- nvmf/common.sh@58 -- # uname 00:06:19.469 17:22:46 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:19.469 17:22:46 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:19.469 17:22:46 -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:19.469 17:22:46 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:19.728 17:22:46 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:19.728 17:22:46 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:19.728 17:22:46 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:19.728 17:22:46 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:19.728 17:22:46 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:06:19.728 17:22:46 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:19.728 17:22:46 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:19.728 17:22:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:19.728 17:22:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:19.728 17:22:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:19.728 17:22:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:19.728 17:22:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:19.728 17:22:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:19.728 17:22:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:19.728 17:22:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:19.728 17:22:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:19.728 17:22:46 -- nvmf/common.sh@105 -- # continue 2 00:06:19.728 17:22:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:19.728 17:22:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:19.728 17:22:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:19.728 17:22:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:19.728 17:22:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:19.728 17:22:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:19.728 17:22:46 -- nvmf/common.sh@105 -- # continue 2 00:06:19.728 17:22:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:19.728 17:22:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:19.728 17:22:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 
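The get_ip_address calls traced at this point resolve each RDMA interface's IPv4 address with the ip/awk/cut pipeline that follows, before the 192.168.100.x assignments are checked. A minimal standalone sketch of that lookup, mirroring the nvmf/common.sh helper exactly as it appears in this trace (the interface name is assumed to already exist on the host):

  # Print the first IPv4 address configured on an interface.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  get_ip_address mlx_0_0   # on this test bed prints 192.168.100.8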
00:06:19.728 17:22:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:19.728 17:22:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:19.728 17:22:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:19.728 17:22:46 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:19.728 17:22:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:19.728 17:22:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:19.728 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:19.728 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:06:19.728 altname enp9s0f0np0 00:06:19.728 inet 192.168.100.8/24 scope global mlx_0_0 00:06:19.728 valid_lft forever preferred_lft forever 00:06:19.728 17:22:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:19.728 17:22:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:19.728 17:22:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:19.728 17:22:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:19.728 17:22:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:19.728 17:22:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:19.728 17:22:46 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:19.728 17:22:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:19.728 17:22:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:19.728 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:19.728 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:06:19.728 altname enp9s0f1np1 00:06:19.728 inet 192.168.100.9/24 scope global mlx_0_1 00:06:19.728 valid_lft forever preferred_lft forever 00:06:19.728 17:22:46 -- nvmf/common.sh@411 -- # return 0 00:06:19.728 17:22:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:19.728 17:22:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:19.728 17:22:46 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:06:19.728 17:22:46 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:06:19.728 17:22:46 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:19.728 17:22:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:19.728 17:22:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:19.728 17:22:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:19.728 17:22:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:19.728 17:22:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:19.728 17:22:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:19.728 17:22:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:19.728 17:22:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:19.728 17:22:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:19.728 17:22:46 -- nvmf/common.sh@105 -- # continue 2 00:06:19.728 17:22:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:19.728 17:22:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:19.728 17:22:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:19.728 17:22:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:19.728 17:22:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:19.728 17:22:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:19.728 17:22:46 -- nvmf/common.sh@105 -- # continue 2 00:06:19.728 17:22:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:19.728 17:22:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:19.728 17:22:46 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:06:19.728 17:22:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:19.728 17:22:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:19.728 17:22:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:19.728 17:22:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:19.728 17:22:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:19.728 17:22:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:19.728 17:22:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:19.728 17:22:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:19.728 17:22:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:19.728 17:22:46 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:06:19.728 192.168.100.9' 00:06:19.728 17:22:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:06:19.728 192.168.100.9' 00:06:19.728 17:22:46 -- nvmf/common.sh@446 -- # head -n 1 00:06:19.728 17:22:46 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:19.728 17:22:46 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:06:19.728 192.168.100.9' 00:06:19.728 17:22:46 -- nvmf/common.sh@447 -- # tail -n +2 00:06:19.728 17:22:46 -- nvmf/common.sh@447 -- # head -n 1 00:06:19.728 17:22:46 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:19.728 17:22:46 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:06:19.728 17:22:46 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:19.728 17:22:46 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:06:19.728 17:22:46 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:06:19.728 17:22:46 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:06:19.728 17:22:46 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:19.728 17:22:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:19.728 17:22:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:19.728 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:19.728 17:22:46 -- nvmf/common.sh@470 -- # nvmfpid=1918390 00:06:19.728 17:22:46 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:19.728 17:22:46 -- nvmf/common.sh@471 -- # waitforlisten 1918390 00:06:19.728 17:22:46 -- common/autotest_common.sh@817 -- # '[' -z 1918390 ']' 00:06:19.728 17:22:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.728 17:22:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:19.728 17:22:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.728 17:22:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:19.728 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:19.728 [2024-04-18 17:22:46.160011] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:06:19.728 [2024-04-18 17:22:46.160088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.728 [2024-04-18 17:22:46.218008] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.986 [2024-04-18 17:22:46.329308] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
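The RDMA_IP_LIST handling traced just above turns the two discovered addresses into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP before nvmf_tgt is launched. A hedged sketch of that head/tail selection, using the addresses seen in this run:

  # Addresses gathered from the mlx_0_* interfaces, one per line.
  RDMA_IP_LIST=$(printf '192.168.100.8\n192.168.100.9')

  # First line becomes the primary listener address, second line the secondary.
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

  # Abort early if no RDMA-capable NIC produced an address.
  [ -n "$NVMF_FIRST_TARGET_IP" ] || { echo "no RDMA IP found" >&2; exit 1; }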
00:06:19.986 [2024-04-18 17:22:46.329369] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.986 [2024-04-18 17:22:46.329404] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.986 [2024-04-18 17:22:46.329417] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.986 [2024-04-18 17:22:46.329428] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:19.986 [2024-04-18 17:22:46.329553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.986 [2024-04-18 17:22:46.329619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.986 [2024-04-18 17:22:46.329685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.986 [2024-04-18 17:22:46.329688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.986 17:22:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:19.986 17:22:46 -- common/autotest_common.sh@850 -- # return 0 00:06:19.986 17:22:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:19.987 17:22:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:19.987 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:19.987 17:22:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:19.987 17:22:46 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:19.987 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.987 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:19.987 [2024-04-18 17:22:46.509044] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2011b60/0x2016050) succeed. 00:06:19.987 [2024-04-18 17:22:46.519854] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2013150/0x20576e0) succeed. 
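With the RDMA transport created above, the rpc_cmd calls that follow build four null-backed subsystems (bdev_null_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener). A condensed sketch of one iteration, assuming scripts/rpc.py in the SPDK checkout and the nvmf_tgt started above listening on the default RPC socket:

  rpc=./scripts/rpc.py    # assumed location inside the SPDK checkout

  # RDMA transport with the shared-buffer settings used in this run.
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

  # A null bdev (102400 x 512, the sizes discovery.sh uses) exported through
  # its own subsystem, listening on 192.168.100.8:4420.
  $rpc bdev_null_create Null1 102400 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420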
00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@26 -- # seq 1 4 00:06:20.245 17:22:46 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:20.245 17:22:46 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.245 Null1 00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.245 [2024-04-18 17:22:46.701074] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:20.245 17:22:46 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.245 Null2 00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:20.245 17:22:46 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.245 Null3 00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:20.245 17:22:46 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.245 Null4 00:06:20.245 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.245 17:22:46 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:20.245 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.245 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.503 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.503 17:22:46 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:20.503 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.503 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.503 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.503 17:22:46 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:06:20.503 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.503 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.503 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.503 17:22:46 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:20.503 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.503 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.503 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.503 17:22:46 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:06:20.503 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.503 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.503 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.503 17:22:46 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 4420 00:06:20.503 00:06:20.503 Discovery Log Number of Records 6, Generation counter 6 00:06:20.503 =====Discovery Log Entry 0====== 00:06:20.503 trtype: 
rdma 00:06:20.503 adrfam: ipv4 00:06:20.503 subtype: current discovery subsystem 00:06:20.503 treq: not required 00:06:20.503 portid: 0 00:06:20.503 trsvcid: 4420 00:06:20.503 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:20.503 traddr: 192.168.100.8 00:06:20.503 eflags: explicit discovery connections, duplicate discovery information 00:06:20.503 rdma_prtype: not specified 00:06:20.503 rdma_qptype: connected 00:06:20.503 rdma_cms: rdma-cm 00:06:20.503 rdma_pkey: 0x0000 00:06:20.503 =====Discovery Log Entry 1====== 00:06:20.503 trtype: rdma 00:06:20.503 adrfam: ipv4 00:06:20.503 subtype: nvme subsystem 00:06:20.503 treq: not required 00:06:20.503 portid: 0 00:06:20.503 trsvcid: 4420 00:06:20.503 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:20.503 traddr: 192.168.100.8 00:06:20.504 eflags: none 00:06:20.504 rdma_prtype: not specified 00:06:20.504 rdma_qptype: connected 00:06:20.504 rdma_cms: rdma-cm 00:06:20.504 rdma_pkey: 0x0000 00:06:20.504 =====Discovery Log Entry 2====== 00:06:20.504 trtype: rdma 00:06:20.504 adrfam: ipv4 00:06:20.504 subtype: nvme subsystem 00:06:20.504 treq: not required 00:06:20.504 portid: 0 00:06:20.504 trsvcid: 4420 00:06:20.504 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:20.504 traddr: 192.168.100.8 00:06:20.504 eflags: none 00:06:20.504 rdma_prtype: not specified 00:06:20.504 rdma_qptype: connected 00:06:20.504 rdma_cms: rdma-cm 00:06:20.504 rdma_pkey: 0x0000 00:06:20.504 =====Discovery Log Entry 3====== 00:06:20.504 trtype: rdma 00:06:20.504 adrfam: ipv4 00:06:20.504 subtype: nvme subsystem 00:06:20.504 treq: not required 00:06:20.504 portid: 0 00:06:20.504 trsvcid: 4420 00:06:20.504 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:20.504 traddr: 192.168.100.8 00:06:20.504 eflags: none 00:06:20.504 rdma_prtype: not specified 00:06:20.504 rdma_qptype: connected 00:06:20.504 rdma_cms: rdma-cm 00:06:20.504 rdma_pkey: 0x0000 00:06:20.504 =====Discovery Log Entry 4====== 00:06:20.504 trtype: rdma 00:06:20.504 adrfam: ipv4 00:06:20.504 subtype: nvme subsystem 00:06:20.504 treq: not required 00:06:20.504 portid: 0 00:06:20.504 trsvcid: 4420 00:06:20.504 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:20.504 traddr: 192.168.100.8 00:06:20.504 eflags: none 00:06:20.504 rdma_prtype: not specified 00:06:20.504 rdma_qptype: connected 00:06:20.504 rdma_cms: rdma-cm 00:06:20.504 rdma_pkey: 0x0000 00:06:20.504 =====Discovery Log Entry 5====== 00:06:20.504 trtype: rdma 00:06:20.504 adrfam: ipv4 00:06:20.504 subtype: discovery subsystem referral 00:06:20.504 treq: not required 00:06:20.504 portid: 0 00:06:20.504 trsvcid: 4430 00:06:20.504 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:20.504 traddr: 192.168.100.8 00:06:20.504 eflags: none 00:06:20.504 rdma_prtype: unrecognized 00:06:20.504 rdma_qptype: unrecognized 00:06:20.504 rdma_cms: unrecognized 00:06:20.504 rdma_pkey: 0x0000 00:06:20.504 17:22:46 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:20.504 Perform nvmf subsystem discovery via RPC 00:06:20.504 17:22:46 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:20.504 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.504 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.504 [2024-04-18 17:22:46.893505] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:06:20.504 [ 00:06:20.504 { 00:06:20.504 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:20.504 "subtype": "Discovery", 
00:06:20.504 "listen_addresses": [ 00:06:20.504 { 00:06:20.504 "transport": "RDMA", 00:06:20.504 "trtype": "RDMA", 00:06:20.504 "adrfam": "IPv4", 00:06:20.504 "traddr": "192.168.100.8", 00:06:20.504 "trsvcid": "4420" 00:06:20.504 } 00:06:20.504 ], 00:06:20.504 "allow_any_host": true, 00:06:20.504 "hosts": [] 00:06:20.504 }, 00:06:20.504 { 00:06:20.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:20.504 "subtype": "NVMe", 00:06:20.504 "listen_addresses": [ 00:06:20.504 { 00:06:20.504 "transport": "RDMA", 00:06:20.504 "trtype": "RDMA", 00:06:20.504 "adrfam": "IPv4", 00:06:20.504 "traddr": "192.168.100.8", 00:06:20.504 "trsvcid": "4420" 00:06:20.504 } 00:06:20.504 ], 00:06:20.504 "allow_any_host": true, 00:06:20.504 "hosts": [], 00:06:20.504 "serial_number": "SPDK00000000000001", 00:06:20.504 "model_number": "SPDK bdev Controller", 00:06:20.504 "max_namespaces": 32, 00:06:20.504 "min_cntlid": 1, 00:06:20.504 "max_cntlid": 65519, 00:06:20.504 "namespaces": [ 00:06:20.504 { 00:06:20.504 "nsid": 1, 00:06:20.504 "bdev_name": "Null1", 00:06:20.504 "name": "Null1", 00:06:20.504 "nguid": "8745A1FF8281431EB5F42C83F369091A", 00:06:20.504 "uuid": "8745a1ff-8281-431e-b5f4-2c83f369091a" 00:06:20.504 } 00:06:20.504 ] 00:06:20.504 }, 00:06:20.504 { 00:06:20.504 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:20.504 "subtype": "NVMe", 00:06:20.504 "listen_addresses": [ 00:06:20.504 { 00:06:20.504 "transport": "RDMA", 00:06:20.504 "trtype": "RDMA", 00:06:20.504 "adrfam": "IPv4", 00:06:20.504 "traddr": "192.168.100.8", 00:06:20.504 "trsvcid": "4420" 00:06:20.504 } 00:06:20.504 ], 00:06:20.504 "allow_any_host": true, 00:06:20.504 "hosts": [], 00:06:20.504 "serial_number": "SPDK00000000000002", 00:06:20.504 "model_number": "SPDK bdev Controller", 00:06:20.504 "max_namespaces": 32, 00:06:20.504 "min_cntlid": 1, 00:06:20.504 "max_cntlid": 65519, 00:06:20.504 "namespaces": [ 00:06:20.504 { 00:06:20.504 "nsid": 1, 00:06:20.504 "bdev_name": "Null2", 00:06:20.504 "name": "Null2", 00:06:20.504 "nguid": "17241C8D2B0D45B9B3308EBE5F93D562", 00:06:20.504 "uuid": "17241c8d-2b0d-45b9-b330-8ebe5f93d562" 00:06:20.504 } 00:06:20.504 ] 00:06:20.504 }, 00:06:20.504 { 00:06:20.504 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:20.504 "subtype": "NVMe", 00:06:20.504 "listen_addresses": [ 00:06:20.504 { 00:06:20.504 "transport": "RDMA", 00:06:20.504 "trtype": "RDMA", 00:06:20.504 "adrfam": "IPv4", 00:06:20.504 "traddr": "192.168.100.8", 00:06:20.504 "trsvcid": "4420" 00:06:20.504 } 00:06:20.504 ], 00:06:20.504 "allow_any_host": true, 00:06:20.504 "hosts": [], 00:06:20.504 "serial_number": "SPDK00000000000003", 00:06:20.504 "model_number": "SPDK bdev Controller", 00:06:20.504 "max_namespaces": 32, 00:06:20.504 "min_cntlid": 1, 00:06:20.504 "max_cntlid": 65519, 00:06:20.504 "namespaces": [ 00:06:20.504 { 00:06:20.504 "nsid": 1, 00:06:20.504 "bdev_name": "Null3", 00:06:20.504 "name": "Null3", 00:06:20.504 "nguid": "B54D63302EDF4F04838A6149364A7335", 00:06:20.504 "uuid": "b54d6330-2edf-4f04-838a-6149364a7335" 00:06:20.504 } 00:06:20.504 ] 00:06:20.504 }, 00:06:20.504 { 00:06:20.504 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:20.504 "subtype": "NVMe", 00:06:20.504 "listen_addresses": [ 00:06:20.504 { 00:06:20.504 "transport": "RDMA", 00:06:20.504 "trtype": "RDMA", 00:06:20.504 "adrfam": "IPv4", 00:06:20.504 "traddr": "192.168.100.8", 00:06:20.504 "trsvcid": "4420" 00:06:20.504 } 00:06:20.504 ], 00:06:20.504 "allow_any_host": true, 00:06:20.504 "hosts": [], 00:06:20.504 "serial_number": "SPDK00000000000004", 00:06:20.504 "model_number": "SPDK bdev 
Controller", 00:06:20.504 "max_namespaces": 32, 00:06:20.504 "min_cntlid": 1, 00:06:20.504 "max_cntlid": 65519, 00:06:20.504 "namespaces": [ 00:06:20.504 { 00:06:20.504 "nsid": 1, 00:06:20.504 "bdev_name": "Null4", 00:06:20.504 "name": "Null4", 00:06:20.504 "nguid": "B9D1C86984984844A94ECF7E0EC23140", 00:06:20.504 "uuid": "b9d1c869-8498-4844-a94e-cf7e0ec23140" 00:06:20.504 } 00:06:20.504 ] 00:06:20.504 } 00:06:20.504 ] 00:06:20.504 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.504 17:22:46 -- target/discovery.sh@42 -- # seq 1 4 00:06:20.504 17:22:46 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:20.504 17:22:46 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:20.504 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.504 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.504 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.504 17:22:46 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:20.504 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.504 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.504 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.504 17:22:46 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:20.504 17:22:46 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:20.504 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.504 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.504 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.504 17:22:46 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:20.504 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.504 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.504 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.504 17:22:46 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:20.504 17:22:46 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:20.504 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.504 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.504 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.504 17:22:46 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:20.504 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.504 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.504 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.504 17:22:46 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:20.504 17:22:46 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:20.504 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.504 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.504 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.504 17:22:46 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:20.505 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.505 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.505 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.505 17:22:46 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:06:20.505 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.505 
17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.505 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.505 17:22:46 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:20.505 17:22:46 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:20.505 17:22:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:20.505 17:22:46 -- common/autotest_common.sh@10 -- # set +x 00:06:20.505 17:22:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:20.505 17:22:47 -- target/discovery.sh@49 -- # check_bdevs= 00:06:20.505 17:22:47 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:20.505 17:22:47 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:20.505 17:22:47 -- target/discovery.sh@57 -- # nvmftestfini 00:06:20.505 17:22:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:20.505 17:22:47 -- nvmf/common.sh@117 -- # sync 00:06:20.505 17:22:47 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:06:20.505 17:22:47 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:06:20.505 17:22:47 -- nvmf/common.sh@120 -- # set +e 00:06:20.505 17:22:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:20.505 17:22:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:06:20.505 rmmod nvme_rdma 00:06:20.764 rmmod nvme_fabrics 00:06:20.764 17:22:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:20.764 17:22:47 -- nvmf/common.sh@124 -- # set -e 00:06:20.764 17:22:47 -- nvmf/common.sh@125 -- # return 0 00:06:20.764 17:22:47 -- nvmf/common.sh@478 -- # '[' -n 1918390 ']' 00:06:20.764 17:22:47 -- nvmf/common.sh@479 -- # killprocess 1918390 00:06:20.764 17:22:47 -- common/autotest_common.sh@936 -- # '[' -z 1918390 ']' 00:06:20.765 17:22:47 -- common/autotest_common.sh@940 -- # kill -0 1918390 00:06:20.765 17:22:47 -- common/autotest_common.sh@941 -- # uname 00:06:20.765 17:22:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:20.765 17:22:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1918390 00:06:20.765 17:22:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:20.765 17:22:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:20.765 17:22:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1918390' 00:06:20.765 killing process with pid 1918390 00:06:20.765 17:22:47 -- common/autotest_common.sh@955 -- # kill 1918390 00:06:20.765 [2024-04-18 17:22:47.091490] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:06:20.765 17:22:47 -- common/autotest_common.sh@960 -- # wait 1918390 00:06:21.024 17:22:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:21.024 17:22:47 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:06:21.024 00:06:21.024 real 0m3.469s 00:06:21.024 user 0m4.938s 00:06:21.024 sys 0m1.811s 00:06:21.024 17:22:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.024 17:22:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.024 ************************************ 00:06:21.024 END TEST nvmf_discovery 00:06:21.024 ************************************ 00:06:21.024 17:22:47 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:06:21.024 17:22:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:21.024 17:22:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.024 17:22:47 -- 
common/autotest_common.sh@10 -- # set +x 00:06:21.283 ************************************ 00:06:21.283 START TEST nvmf_referrals 00:06:21.283 ************************************ 00:06:21.283 17:22:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:06:21.283 * Looking for test storage... 00:06:21.283 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:21.283 17:22:47 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:21.283 17:22:47 -- nvmf/common.sh@7 -- # uname -s 00:06:21.283 17:22:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:21.283 17:22:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:21.283 17:22:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:21.283 17:22:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:21.283 17:22:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:21.283 17:22:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:21.283 17:22:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:21.283 17:22:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:21.283 17:22:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:21.283 17:22:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:21.283 17:22:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:21.283 17:22:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:21.283 17:22:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:21.283 17:22:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:21.283 17:22:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:21.283 17:22:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:21.283 17:22:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:21.283 17:22:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:21.283 17:22:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.283 17:22:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.283 17:22:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.283 17:22:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.283 17:22:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.283 17:22:47 -- paths/export.sh@5 -- # export PATH 00:06:21.283 17:22:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.283 17:22:47 -- nvmf/common.sh@47 -- # : 0 00:06:21.283 17:22:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:21.283 17:22:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:21.283 17:22:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:21.283 17:22:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:21.283 17:22:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:21.283 17:22:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:21.283 17:22:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:21.283 17:22:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:21.283 17:22:47 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:21.283 17:22:47 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:21.283 17:22:47 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:21.283 17:22:47 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:21.283 17:22:47 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:21.283 17:22:47 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:21.283 17:22:47 -- target/referrals.sh@37 -- # nvmftestinit 00:06:21.283 17:22:47 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:06:21.283 17:22:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:21.283 17:22:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:21.283 17:22:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:21.283 17:22:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:21.283 17:22:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.283 17:22:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:21.283 17:22:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:21.283 17:22:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:21.283 17:22:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:21.283 17:22:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:21.283 17:22:47 -- common/autotest_common.sh@10 -- # set +x 00:06:23.191 17:22:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:23.191 17:22:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:23.191 17:22:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:23.191 17:22:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 
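While nvmftestinit re-probes the RDMA NICs below, the referral defaults set above (127.0.0.2-127.0.0.4 on port 4430) are what this test exercises. A hedged illustration built only from calls already visible earlier in this log (nvmf_discovery_add_referral, nvmf_discovery_remove_referral, nvme discover):

  rpc=./scripts/rpc.py    # assumed location inside the SPDK checkout

  # Advertise an extra discovery endpoint at the referral port.
  $rpc nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430

  # A discovery against the target now includes a "discovery subsystem referral"
  # entry at trsvcid 4430, like Discovery Log Entry 5 earlier in this log.
  nvme discover -t rdma -a 192.168.100.8 -s 4420

  # Remove the referral again.
  $rpc nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430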
00:06:23.191 17:22:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:23.191 17:22:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:23.191 17:22:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:23.191 17:22:49 -- nvmf/common.sh@295 -- # net_devs=() 00:06:23.191 17:22:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:23.191 17:22:49 -- nvmf/common.sh@296 -- # e810=() 00:06:23.191 17:22:49 -- nvmf/common.sh@296 -- # local -ga e810 00:06:23.191 17:22:49 -- nvmf/common.sh@297 -- # x722=() 00:06:23.191 17:22:49 -- nvmf/common.sh@297 -- # local -ga x722 00:06:23.191 17:22:49 -- nvmf/common.sh@298 -- # mlx=() 00:06:23.191 17:22:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:23.191 17:22:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:23.191 17:22:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:23.191 17:22:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:23.191 17:22:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:23.192 17:22:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:23.192 17:22:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:23.192 17:22:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:23.192 17:22:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:23.192 17:22:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:23.192 17:22:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:23.192 17:22:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:23.192 17:22:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:23.192 17:22:49 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:23.192 17:22:49 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:23.192 17:22:49 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:23.192 17:22:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:23.192 17:22:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:06:23.192 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:06:23.192 17:22:49 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:23.192 17:22:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:06:23.192 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:06:23.192 17:22:49 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:23.192 17:22:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:23.192 17:22:49 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.192 17:22:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:23.192 17:22:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.192 17:22:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:06:23.192 Found net devices under 0000:09:00.0: mlx_0_0 00:06:23.192 17:22:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.192 17:22:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:23.192 17:22:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:23.192 17:22:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:23.192 17:22:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:06:23.192 Found net devices under 0000:09:00.1: mlx_0_1 00:06:23.192 17:22:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:23.192 17:22:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:23.192 17:22:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:23.192 17:22:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@409 -- # rdma_device_init 00:06:23.192 17:22:49 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:06:23.192 17:22:49 -- nvmf/common.sh@58 -- # uname 00:06:23.192 17:22:49 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:23.192 17:22:49 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:23.192 17:22:49 -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:23.192 17:22:49 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:23.192 17:22:49 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:23.192 17:22:49 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:23.192 17:22:49 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:23.192 17:22:49 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:23.192 17:22:49 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:06:23.192 17:22:49 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:23.192 17:22:49 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:23.192 17:22:49 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:23.192 17:22:49 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:23.192 17:22:49 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:23.192 17:22:49 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:23.192 17:22:49 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:23.192 17:22:49 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:23.192 17:22:49 -- nvmf/common.sh@105 -- # continue 2 00:06:23.192 17:22:49 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:23.192 17:22:49 -- 
nvmf/common.sh@105 -- # continue 2 00:06:23.192 17:22:49 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:23.192 17:22:49 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:23.192 17:22:49 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:23.192 17:22:49 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:23.192 17:22:49 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:23.192 17:22:49 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:23.192 17:22:49 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:23.192 17:22:49 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:23.192 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:23.192 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:06:23.192 altname enp9s0f0np0 00:06:23.192 inet 192.168.100.8/24 scope global mlx_0_0 00:06:23.192 valid_lft forever preferred_lft forever 00:06:23.192 17:22:49 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:23.192 17:22:49 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:23.192 17:22:49 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:23.192 17:22:49 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:23.192 17:22:49 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:23.192 17:22:49 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:23.192 17:22:49 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:23.192 17:22:49 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:23.192 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:23.192 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:06:23.192 altname enp9s0f1np1 00:06:23.192 inet 192.168.100.9/24 scope global mlx_0_1 00:06:23.192 valid_lft forever preferred_lft forever 00:06:23.192 17:22:49 -- nvmf/common.sh@411 -- # return 0 00:06:23.192 17:22:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:23.192 17:22:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:23.192 17:22:49 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:06:23.192 17:22:49 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:23.192 17:22:49 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:23.192 17:22:49 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:23.192 17:22:49 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:23.192 17:22:49 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:23.192 17:22:49 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:23.192 17:22:49 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:23.192 17:22:49 -- nvmf/common.sh@105 -- # continue 2 00:06:23.192 17:22:49 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:23.192 17:22:49 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:23.192 17:22:49 -- nvmf/common.sh@104 -- # echo mlx_0_1 
00:06:23.192 17:22:49 -- nvmf/common.sh@105 -- # continue 2 00:06:23.192 17:22:49 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:23.192 17:22:49 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:23.192 17:22:49 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:23.192 17:22:49 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:23.192 17:22:49 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:23.192 17:22:49 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:23.192 17:22:49 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:23.192 17:22:49 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:23.192 17:22:49 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:23.192 17:22:49 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:23.192 17:22:49 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:23.192 17:22:49 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:23.192 17:22:49 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:06:23.192 192.168.100.9' 00:06:23.192 17:22:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:06:23.192 192.168.100.9' 00:06:23.192 17:22:49 -- nvmf/common.sh@446 -- # head -n 1 00:06:23.192 17:22:49 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:23.193 17:22:49 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:06:23.193 192.168.100.9' 00:06:23.193 17:22:49 -- nvmf/common.sh@447 -- # tail -n +2 00:06:23.193 17:22:49 -- nvmf/common.sh@447 -- # head -n 1 00:06:23.193 17:22:49 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:23.193 17:22:49 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:06:23.193 17:22:49 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:23.193 17:22:49 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:06:23.193 17:22:49 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:06:23.193 17:22:49 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:06:23.193 17:22:49 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:23.193 17:22:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:23.193 17:22:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:23.193 17:22:49 -- common/autotest_common.sh@10 -- # set +x 00:06:23.193 17:22:49 -- nvmf/common.sh@470 -- # nvmfpid=1920220 00:06:23.193 17:22:49 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:23.193 17:22:49 -- nvmf/common.sh@471 -- # waitforlisten 1920220 00:06:23.193 17:22:49 -- common/autotest_common.sh@817 -- # '[' -z 1920220 ']' 00:06:23.193 17:22:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.193 17:22:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:23.193 17:22:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.193 17:22:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:23.193 17:22:49 -- common/autotest_common.sh@10 -- # set +x 00:06:23.193 [2024-04-18 17:22:49.710215] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:06:23.193 [2024-04-18 17:22:49.710295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.453 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.453 [2024-04-18 17:22:49.768310] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.453 [2024-04-18 17:22:49.878123] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:23.453 [2024-04-18 17:22:49.878200] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:23.453 [2024-04-18 17:22:49.878214] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:23.453 [2024-04-18 17:22:49.878225] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:23.453 [2024-04-18 17:22:49.878235] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:23.453 [2024-04-18 17:22:49.878353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.453 [2024-04-18 17:22:49.878419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.453 [2024-04-18 17:22:49.878485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.453 [2024-04-18 17:22:49.878488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.712 17:22:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:23.712 17:22:50 -- common/autotest_common.sh@850 -- # return 0 00:06:23.712 17:22:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:23.712 17:22:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:23.712 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.712 17:22:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.712 17:22:50 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:23.712 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.712 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.712 [2024-04-18 17:22:50.055746] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x82cb60/0x831050) succeed. 00:06:23.712 [2024-04-18 17:22:50.066944] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x82e150/0x8726e0) succeed. 
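At this point in the trace the harness has launched the target and created the RDMA transport. Reproduced by hand, that is one process launch plus one JSON-RPC call. The sketch below is an approximation, not the autotest code itself: the nvmf_tgt flags and transport options are copied from the trace, the workspace path is shortened, and a simple poll on rpc_get_methods stands in for the harness's waitforlisten helper.

    # Run as root, as the CI node does; path shortened from the Jenkins workspace.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Wait until the RPC socket answers (stand-in for waitforlisten).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # Same transport options as the rpc_cmd call traced above.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192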
00:06:23.712 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.712 17:22:50 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:06:23.712 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.712 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.712 [2024-04-18 17:22:50.221848] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:06:23.712 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.712 17:22:50 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:06:23.712 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.712 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.712 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.712 17:22:50 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:06:23.712 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.712 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.712 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.712 17:22:50 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:06:23.712 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.712 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.712 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.712 17:22:50 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.712 17:22:50 -- target/referrals.sh@48 -- # jq length 00:06:23.712 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.712 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.973 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.973 17:22:50 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:23.973 17:22:50 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:23.973 17:22:50 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:23.973 17:22:50 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.973 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.973 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.973 17:22:50 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:23.973 17:22:50 -- target/referrals.sh@21 -- # sort 00:06:23.973 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.973 17:22:50 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:23.973 17:22:50 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:23.973 17:22:50 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:23.973 17:22:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:23.973 17:22:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:23.973 17:22:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:23.973 17:22:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:23.973 17:22:50 -- target/referrals.sh@26 -- # sort 00:06:23.973 17:22:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
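The block above is the core of the referrals check: a discovery listener on port 8009, three referrals on port 4430, and the same list read back both from the target (RPC plus jq) and from the host (nvme discover). Condensed, with the rpc.py and nvme invocations copied from the trace (the harness additionally passes its --hostnqn/--hostid pair to nvme discover), the round-trip looks like:

    # Discovery listener plus three referrals, as rpc_cmd issues them above.
    ./scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    # Target-side view: expect a length of 3.
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length
    # Host-side view: referrals show up as extra discovery log entries; the jq
    # filter is the one used by get_referral_ips in the trace.
    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort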
00:06:23.973 17:22:50 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:23.973 17:22:50 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:06:23.973 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.973 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.973 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.973 17:22:50 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:06:23.973 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.973 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.973 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.973 17:22:50 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:06:23.973 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.973 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.973 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.973 17:22:50 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:23.973 17:22:50 -- target/referrals.sh@56 -- # jq length 00:06:23.973 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:23.973 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:23.973 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:23.973 17:22:50 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:23.973 17:22:50 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:23.973 17:22:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:23.973 17:22:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:23.973 17:22:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:23.973 17:22:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:23.973 17:22:50 -- target/referrals.sh@26 -- # sort 00:06:24.234 17:22:50 -- target/referrals.sh@26 -- # echo 00:06:24.234 17:22:50 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:24.234 17:22:50 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:06:24.234 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:24.234 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:24.234 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:24.234 17:22:50 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:24.234 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:24.234 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:24.234 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:24.234 17:22:50 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:24.234 17:22:50 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:24.234 17:22:50 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:24.234 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:24.234 17:22:50 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:24.234 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:24.234 17:22:50 -- 
target/referrals.sh@21 -- # sort 00:06:24.234 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:24.234 17:22:50 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:24.234 17:22:50 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:24.234 17:22:50 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:24.234 17:22:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:24.234 17:22:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:24.234 17:22:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:24.234 17:22:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:24.234 17:22:50 -- target/referrals.sh@26 -- # sort 00:06:24.234 17:22:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:24.234 17:22:50 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:24.234 17:22:50 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:24.234 17:22:50 -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:24.234 17:22:50 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:24.234 17:22:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:24.234 17:22:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:24.234 17:22:50 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:24.234 17:22:50 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:24.234 17:22:50 -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:24.234 17:22:50 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:24.234 17:22:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:24.234 17:22:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:24.493 17:22:50 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:24.493 17:22:50 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:24.493 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:24.493 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:24.493 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:24.493 17:22:50 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:24.493 17:22:50 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:24.493 17:22:50 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:24.493 17:22:50 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:24.493 17:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:24.494 17:22:50 -- common/autotest_common.sh@10 -- # set +x 00:06:24.494 17:22:50 -- target/referrals.sh@21 -- # 
sort 00:06:24.494 17:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:24.494 17:22:50 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:24.494 17:22:50 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:24.494 17:22:50 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:24.494 17:22:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:24.494 17:22:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:24.494 17:22:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:24.494 17:22:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:24.494 17:22:50 -- target/referrals.sh@26 -- # sort 00:06:24.494 17:22:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:24.494 17:22:50 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:24.494 17:22:50 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:24.494 17:22:50 -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:24.494 17:22:50 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:24.494 17:22:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:24.494 17:22:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:24.494 17:22:50 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:24.494 17:22:50 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:24.494 17:22:50 -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:24.494 17:22:50 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:24.494 17:22:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:24.494 17:22:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:24.494 17:22:51 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:24.494 17:22:51 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:24.494 17:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:24.494 17:22:51 -- common/autotest_common.sh@10 -- # set +x 00:06:24.753 17:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:24.753 17:22:51 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:24.753 17:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:24.753 17:22:51 -- target/referrals.sh@82 -- # jq length 00:06:24.753 17:22:51 -- common/autotest_common.sh@10 -- # set +x 00:06:24.753 17:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:24.753 17:22:51 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:24.753 17:22:51 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:24.753 17:22:51 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:24.753 17:22:51 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:24.753 17:22:51 -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 8009 -o json 00:06:24.753 17:22:51 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:24.753 17:22:51 -- target/referrals.sh@26 -- # sort 00:06:24.753 17:22:51 -- target/referrals.sh@26 -- # echo 00:06:24.753 17:22:51 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:24.753 17:22:51 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:24.753 17:22:51 -- target/referrals.sh@86 -- # nvmftestfini 00:06:24.753 17:22:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:24.753 17:22:51 -- nvmf/common.sh@117 -- # sync 00:06:24.753 17:22:51 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:06:24.753 17:22:51 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:06:24.753 17:22:51 -- nvmf/common.sh@120 -- # set +e 00:06:24.753 17:22:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:24.753 17:22:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:06:24.753 rmmod nvme_rdma 00:06:24.753 rmmod nvme_fabrics 00:06:24.753 17:22:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:24.753 17:22:51 -- nvmf/common.sh@124 -- # set -e 00:06:24.753 17:22:51 -- nvmf/common.sh@125 -- # return 0 00:06:24.753 17:22:51 -- nvmf/common.sh@478 -- # '[' -n 1920220 ']' 00:06:24.753 17:22:51 -- nvmf/common.sh@479 -- # killprocess 1920220 00:06:24.753 17:22:51 -- common/autotest_common.sh@936 -- # '[' -z 1920220 ']' 00:06:24.753 17:22:51 -- common/autotest_common.sh@940 -- # kill -0 1920220 00:06:24.753 17:22:51 -- common/autotest_common.sh@941 -- # uname 00:06:24.753 17:22:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.753 17:22:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1920220 00:06:24.753 17:22:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.753 17:22:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.753 17:22:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1920220' 00:06:24.753 killing process with pid 1920220 00:06:24.753 17:22:51 -- common/autotest_common.sh@955 -- # kill 1920220 00:06:24.753 17:22:51 -- common/autotest_common.sh@960 -- # wait 1920220 00:06:25.321 17:22:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:25.321 17:22:51 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:06:25.321 00:06:25.321 real 0m3.975s 00:06:25.321 user 0m7.686s 00:06:25.321 sys 0m1.941s 00:06:25.321 17:22:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.321 17:22:51 -- common/autotest_common.sh@10 -- # set +x 00:06:25.321 ************************************ 00:06:25.321 END TEST nvmf_referrals 00:06:25.321 ************************************ 00:06:25.322 17:22:51 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:06:25.322 17:22:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:25.322 17:22:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.322 17:22:51 -- common/autotest_common.sh@10 -- # set +x 00:06:25.322 ************************************ 00:06:25.322 START TEST nvmf_connect_disconnect 00:06:25.322 ************************************ 00:06:25.322 17:22:51 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:06:25.322 * Looking for test storage... 00:06:25.322 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:25.322 17:22:51 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.322 17:22:51 -- nvmf/common.sh@7 -- # uname -s 00:06:25.322 17:22:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.322 17:22:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.322 17:22:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.322 17:22:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.322 17:22:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.322 17:22:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.322 17:22:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.322 17:22:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.322 17:22:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.322 17:22:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.322 17:22:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:25.322 17:22:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:25.322 17:22:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.322 17:22:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.322 17:22:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.322 17:22:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.322 17:22:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:25.322 17:22:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.322 17:22:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.322 17:22:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.322 17:22:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.322 17:22:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.322 17:22:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.322 17:22:51 -- paths/export.sh@5 -- # export PATH 00:06:25.322 17:22:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.322 17:22:51 -- nvmf/common.sh@47 -- # : 0 00:06:25.322 17:22:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.322 17:22:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.322 17:22:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.322 17:22:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.322 17:22:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.322 17:22:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.322 17:22:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.322 17:22:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.322 17:22:51 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:25.322 17:22:51 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:25.322 17:22:51 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:25.322 17:22:51 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:06:25.322 17:22:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:25.322 17:22:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:25.322 17:22:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:25.322 17:22:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:25.322 17:22:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.322 17:22:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:25.322 17:22:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.322 17:22:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:25.322 17:22:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:25.322 17:22:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:25.322 17:22:51 -- common/autotest_common.sh@10 -- # set +x 00:06:27.857 17:22:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:27.857 17:22:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:27.857 17:22:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:27.857 17:22:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:27.857 17:22:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:27.857 17:22:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:27.857 17:22:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:27.857 17:22:53 -- nvmf/common.sh@295 -- # net_devs=() 00:06:27.857 17:22:53 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:06:27.857 17:22:53 -- nvmf/common.sh@296 -- # e810=() 00:06:27.857 17:22:53 -- nvmf/common.sh@296 -- # local -ga e810 00:06:27.857 17:22:53 -- nvmf/common.sh@297 -- # x722=() 00:06:27.857 17:22:53 -- nvmf/common.sh@297 -- # local -ga x722 00:06:27.857 17:22:53 -- nvmf/common.sh@298 -- # mlx=() 00:06:27.857 17:22:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:27.857 17:22:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:27.857 17:22:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:27.857 17:22:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:27.857 17:22:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:27.857 17:22:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:27.857 17:22:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:27.857 17:22:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:27.857 17:22:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:27.857 17:22:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:27.857 17:22:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:27.857 17:22:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:27.857 17:22:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:27.857 17:22:53 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:27.857 17:22:53 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:27.857 17:22:53 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:27.857 17:22:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:27.857 17:22:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:27.857 17:22:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:06:27.857 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:06:27.857 17:22:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:27.857 17:22:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:27.857 17:22:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:06:27.857 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:06:27.857 17:22:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:27.857 17:22:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:27.857 17:22:53 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:27.857 17:22:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.857 17:22:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:27.857 17:22:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.857 17:22:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:06:27.857 Found net devices under 0000:09:00.0: mlx_0_0 
00:06:27.857 17:22:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.857 17:22:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:27.857 17:22:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.857 17:22:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:27.857 17:22:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.857 17:22:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:06:27.857 Found net devices under 0000:09:00.1: mlx_0_1 00:06:27.857 17:22:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.857 17:22:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:27.857 17:22:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:27.857 17:22:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@409 -- # rdma_device_init 00:06:27.857 17:22:53 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:06:27.857 17:22:53 -- nvmf/common.sh@58 -- # uname 00:06:27.857 17:22:53 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:27.857 17:22:53 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:27.857 17:22:53 -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:27.857 17:22:53 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:27.857 17:22:53 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:27.857 17:22:53 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:27.857 17:22:53 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:27.857 17:22:53 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:27.857 17:22:53 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:06:27.857 17:22:53 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:27.857 17:22:53 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:27.857 17:22:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:27.857 17:22:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:27.857 17:22:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:27.857 17:22:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:27.857 17:22:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:27.857 17:22:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:27.857 17:22:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.857 17:22:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:27.857 17:22:53 -- nvmf/common.sh@105 -- # continue 2 00:06:27.857 17:22:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:27.857 17:22:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.857 17:22:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.857 17:22:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:27.857 17:22:53 -- nvmf/common.sh@105 -- # continue 2 00:06:27.857 17:22:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:27.857 17:22:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:27.857 17:22:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:27.857 17:22:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:27.857 17:22:53 -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:06:27.857 17:22:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:27.857 17:22:53 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:27.857 17:22:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:27.857 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:27.857 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:06:27.857 altname enp9s0f0np0 00:06:27.857 inet 192.168.100.8/24 scope global mlx_0_0 00:06:27.857 valid_lft forever preferred_lft forever 00:06:27.857 17:22:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:27.857 17:22:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:27.857 17:22:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:27.857 17:22:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:27.857 17:22:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:27.857 17:22:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:27.857 17:22:53 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:27.857 17:22:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:27.857 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:27.857 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:06:27.857 altname enp9s0f1np1 00:06:27.857 inet 192.168.100.9/24 scope global mlx_0_1 00:06:27.857 valid_lft forever preferred_lft forever 00:06:27.857 17:22:53 -- nvmf/common.sh@411 -- # return 0 00:06:27.857 17:22:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:27.857 17:22:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:27.857 17:22:53 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:06:27.857 17:22:53 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:06:27.857 17:22:53 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:27.857 17:22:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:27.858 17:22:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:27.858 17:22:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:27.858 17:22:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:27.858 17:22:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:27.858 17:22:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:27.858 17:22:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.858 17:22:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:27.858 17:22:53 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:27.858 17:22:53 -- nvmf/common.sh@105 -- # continue 2 00:06:27.858 17:22:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:27.858 17:22:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.858 17:22:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:27.858 17:22:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:27.858 17:22:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:27.858 17:22:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:27.858 17:22:53 -- nvmf/common.sh@105 -- # continue 2 00:06:27.858 17:22:53 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:27.858 17:22:53 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:27.858 17:22:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:27.858 17:22:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 
00:06:27.858 17:22:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:27.858 17:22:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:27.858 17:22:53 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:27.858 17:22:53 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:27.858 17:22:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:27.858 17:22:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:27.858 17:22:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:27.858 17:22:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:27.858 17:22:53 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:06:27.858 192.168.100.9' 00:06:27.858 17:22:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:06:27.858 192.168.100.9' 00:06:27.858 17:22:53 -- nvmf/common.sh@446 -- # head -n 1 00:06:27.858 17:22:53 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:27.858 17:22:53 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:06:27.858 192.168.100.9' 00:06:27.858 17:22:53 -- nvmf/common.sh@447 -- # tail -n +2 00:06:27.858 17:22:53 -- nvmf/common.sh@447 -- # head -n 1 00:06:27.858 17:22:53 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:27.858 17:22:53 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:06:27.858 17:22:53 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:27.858 17:22:53 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:06:27.858 17:22:53 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:06:27.858 17:22:53 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:06:27.858 17:22:53 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:27.858 17:22:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:27.858 17:22:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:27.858 17:22:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.858 17:22:53 -- nvmf/common.sh@470 -- # nvmfpid=1922242 00:06:27.858 17:22:53 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:27.858 17:22:53 -- nvmf/common.sh@471 -- # waitforlisten 1922242 00:06:27.858 17:22:53 -- common/autotest_common.sh@817 -- # '[' -z 1922242 ']' 00:06:27.858 17:22:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.858 17:22:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:27.858 17:22:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.858 17:22:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:27.858 17:22:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.858 [2024-04-18 17:22:53.961799] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:06:27.858 [2024-04-18 17:22:53.961872] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.858 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.858 [2024-04-18 17:22:54.018898] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.858 [2024-04-18 17:22:54.133064] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:27.858 [2024-04-18 17:22:54.133130] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.858 [2024-04-18 17:22:54.133155] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.858 [2024-04-18 17:22:54.133168] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.858 [2024-04-18 17:22:54.133181] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:27.858 [2024-04-18 17:22:54.133279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.858 [2024-04-18 17:22:54.133334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.858 [2024-04-18 17:22:54.133449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.858 [2024-04-18 17:22:54.133452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.858 17:22:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:27.858 17:22:54 -- common/autotest_common.sh@850 -- # return 0 00:06:27.858 17:22:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:27.858 17:22:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:27.858 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:06:27.858 17:22:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:27.858 17:22:54 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:06:27.858 17:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:27.858 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:06:27.858 [2024-04-18 17:22:54.289192] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:27.858 [2024-04-18 17:22:54.311217] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fb3b60/0x1fb8050) succeed. 00:06:27.858 [2024-04-18 17:22:54.321774] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fb5150/0x1ff96e0) succeed. 
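Once the target side is populated further down (a malloc bdev behind nqn.2016-06.io.spdk:cnode1 with an RDMA listener on port 4420), the trace runs five connect/disconnect iterations (num_iterations=5), each ending in a "disconnected 1 controller(s)" line. The real loop lives in connect_disconnect.sh and is not reproduced in the trace; as a rough host-side sketch, using the listener address and port and the "nvme connect -i 15" form set up earlier (the harness also passes its --hostnqn/--hostid pair), one iteration amounts to:

    # One connect/disconnect cycle (sketch; the real test also waits for the
    # namespace to appear and can run I/O in between, elided here).
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    # ... optional I/O against the newly created /dev/nvme* device ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # Prints: NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)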
00:06:28.118 17:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.118 17:22:54 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:28.118 17:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.118 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:06:28.118 17:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.118 17:22:54 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:28.118 17:22:54 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:28.118 17:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.118 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:06:28.118 17:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.118 17:22:54 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:28.118 17:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.118 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:06:28.118 17:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.118 17:22:54 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:28.118 17:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:28.118 17:22:54 -- common/autotest_common.sh@10 -- # set +x 00:06:28.118 [2024-04-18 17:22:54.482680] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:28.118 17:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:28.118 17:22:54 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:28.118 17:22:54 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:28.118 17:22:54 -- target/connect_disconnect.sh@34 -- # set +x 00:06:36.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:44.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:52.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.201 17:23:32 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:07.201 17:23:32 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:07.201 17:23:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:07.201 17:23:32 -- nvmf/common.sh@117 -- # sync 00:07:07.201 17:23:32 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:07.201 17:23:32 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:07.201 17:23:32 -- nvmf/common.sh@120 -- # set +e 00:07:07.201 17:23:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:07.201 17:23:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:07.201 rmmod nvme_rdma 00:07:07.201 rmmod nvme_fabrics 00:07:07.201 17:23:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:07.201 17:23:33 -- nvmf/common.sh@124 -- # set -e 00:07:07.201 17:23:33 -- nvmf/common.sh@125 -- # return 0 00:07:07.201 17:23:33 -- nvmf/common.sh@478 -- # '[' -n 1922242 ']' 00:07:07.201 17:23:33 -- nvmf/common.sh@479 -- # killprocess 1922242 00:07:07.201 17:23:33 -- common/autotest_common.sh@936 -- # '[' -z 1922242 ']' 00:07:07.201 17:23:33 -- common/autotest_common.sh@940 -- # kill -0 1922242 00:07:07.201 17:23:33 -- common/autotest_common.sh@941 -- # uname 00:07:07.201 17:23:33 -- common/autotest_common.sh@941 -- # '[' 
Linux = Linux ']' 00:07:07.201 17:23:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1922242 00:07:07.201 17:23:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:07.201 17:23:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:07.201 17:23:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1922242' 00:07:07.201 killing process with pid 1922242 00:07:07.201 17:23:33 -- common/autotest_common.sh@955 -- # kill 1922242 00:07:07.201 17:23:33 -- common/autotest_common.sh@960 -- # wait 1922242 00:07:07.201 17:23:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:07.201 17:23:33 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:07.201 00:07:07.201 real 0m41.749s 00:07:07.201 user 2m36.888s 00:07:07.201 sys 0m2.607s 00:07:07.201 17:23:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:07.201 17:23:33 -- common/autotest_common.sh@10 -- # set +x 00:07:07.201 ************************************ 00:07:07.201 END TEST nvmf_connect_disconnect 00:07:07.201 ************************************ 00:07:07.201 17:23:33 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:07:07.201 17:23:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:07.201 17:23:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.201 17:23:33 -- common/autotest_common.sh@10 -- # set +x 00:07:07.201 ************************************ 00:07:07.201 START TEST nvmf_multitarget 00:07:07.201 ************************************ 00:07:07.201 17:23:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:07:07.201 * Looking for test storage... 
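Each of the five iterations above is a host-side connect followed by a disconnect against that cnode1 listener; the "disconnected 1 controller(s)" lines are nvme-cli output. The trace only records the disconnect side, so the following is a hedged sketch of what one iteration amounts to, using the NQN, address and port from the setup above (connect_disconnect.sh's exact options may differ; nvmf/common.sh sets NVME_CONNECT='nvme connect -i 15' for these ConnectX NICs, i.e. 15 I/O queues):

NQN=nqn.2016-06.io.spdk:cnode1
nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n "$NQN"   # attach one controller over RDMA
nvme list-subsys | grep -q "$NQN"                               # optional sanity check (assumption, not in the trace)
nvme disconnect -n "$NQN"                                       # prints: NQN:<nqn> disconnected 1 controller(s)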
00:07:07.201 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:07.201 17:23:33 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.201 17:23:33 -- nvmf/common.sh@7 -- # uname -s 00:07:07.201 17:23:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.201 17:23:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.201 17:23:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.201 17:23:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.201 17:23:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.201 17:23:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.201 17:23:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.201 17:23:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.201 17:23:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.201 17:23:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.201 17:23:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.201 17:23:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.201 17:23:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.201 17:23:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.201 17:23:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.201 17:23:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.201 17:23:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:07.201 17:23:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.201 17:23:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.201 17:23:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.201 17:23:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.201 17:23:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.201 17:23:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.201 17:23:33 -- paths/export.sh@5 -- # export PATH 00:07:07.201 17:23:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.201 17:23:33 -- nvmf/common.sh@47 -- # : 0 00:07:07.201 17:23:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.201 17:23:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.201 17:23:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.201 17:23:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.201 17:23:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.201 17:23:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:07.201 17:23:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.201 17:23:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.201 17:23:33 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:07.201 17:23:33 -- target/multitarget.sh@15 -- # nvmftestinit 00:07:07.201 17:23:33 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:07.201 17:23:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.201 17:23:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:07.201 17:23:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:07.201 17:23:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:07.201 17:23:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.201 17:23:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.201 17:23:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.201 17:23:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:07.201 17:23:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:07.201 17:23:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:07.201 17:23:33 -- common/autotest_common.sh@10 -- # set +x 00:07:09.108 17:23:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:09.108 17:23:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:09.108 17:23:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:09.108 17:23:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:09.108 17:23:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:09.108 17:23:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:09.108 17:23:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:09.108 17:23:35 -- nvmf/common.sh@295 -- # net_devs=() 00:07:09.108 17:23:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:09.108 17:23:35 -- 
nvmf/common.sh@296 -- # e810=() 00:07:09.108 17:23:35 -- nvmf/common.sh@296 -- # local -ga e810 00:07:09.108 17:23:35 -- nvmf/common.sh@297 -- # x722=() 00:07:09.108 17:23:35 -- nvmf/common.sh@297 -- # local -ga x722 00:07:09.108 17:23:35 -- nvmf/common.sh@298 -- # mlx=() 00:07:09.108 17:23:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:09.108 17:23:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.108 17:23:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.108 17:23:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.108 17:23:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.108 17:23:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.108 17:23:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:09.108 17:23:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:09.108 17:23:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.108 17:23:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.108 17:23:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.108 17:23:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.108 17:23:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:09.108 17:23:35 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:09.108 17:23:35 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:09.108 17:23:35 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:09.108 17:23:35 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:09.108 17:23:35 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:09.108 17:23:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:09.108 17:23:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.108 17:23:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:07:09.108 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:07:09.108 17:23:35 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:09.108 17:23:35 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:09.108 17:23:35 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:09.108 17:23:35 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:09.108 17:23:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.108 17:23:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:07:09.108 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:07:09.108 17:23:35 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:09.108 17:23:35 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:09.108 17:23:35 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:09.108 17:23:35 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:09.108 17:23:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:09.108 17:23:35 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:09.108 17:23:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.108 17:23:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.108 17:23:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:09.108 17:23:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.108 17:23:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:07:09.108 Found net devices under 0000:09:00.0: mlx_0_0 00:07:09.108 17:23:35 -- 
nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.108 17:23:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.108 17:23:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.108 17:23:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:09.108 17:23:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.108 17:23:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:07:09.108 Found net devices under 0000:09:00.1: mlx_0_1 00:07:09.108 17:23:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.108 17:23:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:09.108 17:23:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:09.108 17:23:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:09.108 17:23:35 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:09.108 17:23:35 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:09.108 17:23:35 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:09.108 17:23:35 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:09.108 17:23:35 -- nvmf/common.sh@58 -- # uname 00:07:09.108 17:23:35 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:09.108 17:23:35 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:09.108 17:23:35 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:09.108 17:23:35 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:09.109 17:23:35 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:09.109 17:23:35 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:09.109 17:23:35 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:09.109 17:23:35 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:09.109 17:23:35 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:09.109 17:23:35 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:09.368 17:23:35 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:09.368 17:23:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:09.368 17:23:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:09.368 17:23:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:09.368 17:23:35 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:09.368 17:23:35 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:09.368 17:23:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:09.368 17:23:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:09.368 17:23:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:09.368 17:23:35 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:09.368 17:23:35 -- nvmf/common.sh@105 -- # continue 2 00:07:09.368 17:23:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:09.368 17:23:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:09.368 17:23:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:09.368 17:23:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:09.368 17:23:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:09.368 17:23:35 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:09.368 17:23:35 -- nvmf/common.sh@105 -- # continue 2 00:07:09.368 17:23:35 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:09.368 17:23:35 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:09.368 17:23:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:09.368 17:23:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:09.368 17:23:35 -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:07:09.368 17:23:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:09.368 17:23:35 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:09.368 17:23:35 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:09.368 17:23:35 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:09.368 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:09.368 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:07:09.368 altname enp9s0f0np0 00:07:09.368 inet 192.168.100.8/24 scope global mlx_0_0 00:07:09.368 valid_lft forever preferred_lft forever 00:07:09.368 17:23:35 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:09.368 17:23:35 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:09.368 17:23:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:09.368 17:23:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:09.368 17:23:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:09.368 17:23:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:09.368 17:23:35 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:09.368 17:23:35 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:09.368 17:23:35 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:09.368 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:09.368 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:07:09.368 altname enp9s0f1np1 00:07:09.368 inet 192.168.100.9/24 scope global mlx_0_1 00:07:09.368 valid_lft forever preferred_lft forever 00:07:09.368 17:23:35 -- nvmf/common.sh@411 -- # return 0 00:07:09.368 17:23:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:09.368 17:23:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:09.368 17:23:35 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:09.368 17:23:35 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:09.368 17:23:35 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:09.368 17:23:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:09.368 17:23:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:09.368 17:23:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:09.368 17:23:35 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:09.368 17:23:35 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:09.368 17:23:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:09.368 17:23:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:09.368 17:23:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:09.368 17:23:35 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:09.368 17:23:35 -- nvmf/common.sh@105 -- # continue 2 00:07:09.368 17:23:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:09.368 17:23:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:09.368 17:23:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:09.369 17:23:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:09.369 17:23:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:09.369 17:23:35 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:09.369 17:23:35 -- nvmf/common.sh@105 -- # continue 2 00:07:09.369 17:23:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:09.369 17:23:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:09.369 17:23:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:09.369 17:23:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:09.369 17:23:35 -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:07:09.369 17:23:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:09.369 17:23:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:09.369 17:23:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:09.369 17:23:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:09.369 17:23:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:09.369 17:23:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:09.369 17:23:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:09.369 17:23:35 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:09.369 192.168.100.9' 00:07:09.369 17:23:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:09.369 192.168.100.9' 00:07:09.369 17:23:35 -- nvmf/common.sh@446 -- # head -n 1 00:07:09.369 17:23:35 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:09.369 17:23:35 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:09.369 192.168.100.9' 00:07:09.369 17:23:35 -- nvmf/common.sh@447 -- # tail -n +2 00:07:09.369 17:23:35 -- nvmf/common.sh@447 -- # head -n 1 00:07:09.369 17:23:35 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:09.369 17:23:35 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:09.369 17:23:35 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:09.369 17:23:35 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:09.369 17:23:35 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:09.369 17:23:35 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:09.369 17:23:35 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:09.369 17:23:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:09.369 17:23:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:09.369 17:23:35 -- common/autotest_common.sh@10 -- # set +x 00:07:09.369 17:23:35 -- nvmf/common.sh@470 -- # nvmfpid=1928624 00:07:09.369 17:23:35 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:09.369 17:23:35 -- nvmf/common.sh@471 -- # waitforlisten 1928624 00:07:09.369 17:23:35 -- common/autotest_common.sh@817 -- # '[' -z 1928624 ']' 00:07:09.369 17:23:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.369 17:23:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:09.369 17:23:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.369 17:23:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:09.369 17:23:35 -- common/autotest_common.sh@10 -- # set +x 00:07:09.369 [2024-04-18 17:23:35.789776] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:07:09.369 [2024-04-18 17:23:35.789857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.369 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.369 [2024-04-18 17:23:35.847127] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.627 [2024-04-18 17:23:35.958804] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
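nvmfappstart above launches the target with core mask 0xF and then blocks in waitforlisten until the RPC socket answers, which is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line is about. Reduced to its essentials the start-up looks roughly like this; the polling loop is a simplification of waitforlisten rather than its exact code, and rpc_get_methods is just a convenient RPC to probe the socket with:

# Start nvmf_tgt with the same shm id, tracepoint mask and core mask as the trace.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Simplified waitforlisten: poll until rpc.py can reach /var/tmp/spdk.sock.
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done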
00:07:09.627 [2024-04-18 17:23:35.958864] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.627 [2024-04-18 17:23:35.958877] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.628 [2024-04-18 17:23:35.958889] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.628 [2024-04-18 17:23:35.958898] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:09.628 [2024-04-18 17:23:35.958951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.628 [2024-04-18 17:23:35.959011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.628 [2024-04-18 17:23:35.959077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.628 [2024-04-18 17:23:35.959080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.628 17:23:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:09.628 17:23:36 -- common/autotest_common.sh@850 -- # return 0 00:07:09.628 17:23:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:09.628 17:23:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:09.628 17:23:36 -- common/autotest_common.sh@10 -- # set +x 00:07:09.628 17:23:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.628 17:23:36 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:09.628 17:23:36 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:09.628 17:23:36 -- target/multitarget.sh@21 -- # jq length 00:07:09.887 17:23:36 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:09.887 17:23:36 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:09.887 "nvmf_tgt_1" 00:07:09.887 17:23:36 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:09.887 "nvmf_tgt_2" 00:07:10.145 17:23:36 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:10.145 17:23:36 -- target/multitarget.sh@28 -- # jq length 00:07:10.145 17:23:36 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:10.145 17:23:36 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:10.145 true 00:07:10.145 17:23:36 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:10.447 true 00:07:10.447 17:23:36 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:10.447 17:23:36 -- target/multitarget.sh@35 -- # jq length 00:07:10.447 17:23:36 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:10.447 17:23:36 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:10.447 17:23:36 -- target/multitarget.sh@41 -- # nvmftestfini 00:07:10.447 17:23:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:10.447 17:23:36 -- nvmf/common.sh@117 -- # sync 00:07:10.447 17:23:36 -- 
nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:10.447 17:23:36 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:10.447 17:23:36 -- nvmf/common.sh@120 -- # set +e 00:07:10.447 17:23:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:10.447 17:23:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:10.447 rmmod nvme_rdma 00:07:10.447 rmmod nvme_fabrics 00:07:10.447 17:23:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:10.447 17:23:36 -- nvmf/common.sh@124 -- # set -e 00:07:10.447 17:23:36 -- nvmf/common.sh@125 -- # return 0 00:07:10.447 17:23:36 -- nvmf/common.sh@478 -- # '[' -n 1928624 ']' 00:07:10.447 17:23:36 -- nvmf/common.sh@479 -- # killprocess 1928624 00:07:10.447 17:23:36 -- common/autotest_common.sh@936 -- # '[' -z 1928624 ']' 00:07:10.447 17:23:36 -- common/autotest_common.sh@940 -- # kill -0 1928624 00:07:10.447 17:23:36 -- common/autotest_common.sh@941 -- # uname 00:07:10.447 17:23:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:10.447 17:23:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1928624 00:07:10.447 17:23:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:10.447 17:23:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:10.447 17:23:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1928624' 00:07:10.447 killing process with pid 1928624 00:07:10.447 17:23:36 -- common/autotest_common.sh@955 -- # kill 1928624 00:07:10.447 17:23:36 -- common/autotest_common.sh@960 -- # wait 1928624 00:07:10.711 17:23:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:10.711 17:23:37 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:10.711 00:07:10.711 real 0m3.658s 00:07:10.711 user 0m6.290s 00:07:10.711 sys 0m1.920s 00:07:10.711 17:23:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.711 17:23:37 -- common/autotest_common.sh@10 -- # set +x 00:07:10.711 ************************************ 00:07:10.711 END TEST nvmf_multitarget 00:07:10.711 ************************************ 00:07:10.711 17:23:37 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:07:10.711 17:23:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:10.711 17:23:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.711 17:23:37 -- common/autotest_common.sh@10 -- # set +x 00:07:10.969 ************************************ 00:07:10.969 START TEST nvmf_rpc 00:07:10.969 ************************************ 00:07:10.969 17:23:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:07:10.969 * Looking for test storage... 
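Stripped of the xtrace noise, the multitarget run above is a short create/verify/delete cycle driven entirely through multitarget_rpc.py: confirm that only the default target exists, create two more with the same -s 32 sizing argument as the trace, confirm three, delete both, confirm one again. As a plain script that is roughly (jq length counts the entries in the JSON array returned by nvmf_get_targets):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py   # path as traced
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target at the start
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
$rpc_py nvmf_delete_target -n nvmf_tgt_1              # prints "true" on success
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]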
00:07:10.969 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:10.969 17:23:37 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.969 17:23:37 -- nvmf/common.sh@7 -- # uname -s 00:07:10.969 17:23:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.969 17:23:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.969 17:23:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.969 17:23:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.969 17:23:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.969 17:23:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.969 17:23:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.969 17:23:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.969 17:23:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.969 17:23:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.969 17:23:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:10.969 17:23:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:10.969 17:23:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.969 17:23:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.969 17:23:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.969 17:23:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.969 17:23:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:10.969 17:23:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.969 17:23:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.969 17:23:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.969 17:23:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.969 17:23:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.969 17:23:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.969 17:23:37 -- paths/export.sh@5 -- # export PATH 00:07:10.969 17:23:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.969 17:23:37 -- nvmf/common.sh@47 -- # : 0 00:07:10.969 17:23:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.969 17:23:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.969 17:23:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.969 17:23:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.969 17:23:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.969 17:23:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:10.969 17:23:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.969 17:23:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.969 17:23:37 -- target/rpc.sh@11 -- # loops=5 00:07:10.969 17:23:37 -- target/rpc.sh@23 -- # nvmftestinit 00:07:10.969 17:23:37 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:10.969 17:23:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.969 17:23:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:10.969 17:23:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:10.969 17:23:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:10.969 17:23:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.969 17:23:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.969 17:23:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.969 17:23:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:10.969 17:23:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:10.969 17:23:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:10.969 17:23:37 -- common/autotest_common.sh@10 -- # set +x 00:07:12.876 17:23:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:12.876 17:23:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:12.876 17:23:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:12.876 17:23:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:12.876 17:23:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:12.876 17:23:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:12.876 17:23:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:12.876 17:23:39 -- nvmf/common.sh@295 -- # net_devs=() 00:07:12.876 17:23:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:12.876 17:23:39 -- nvmf/common.sh@296 -- # e810=() 00:07:12.876 17:23:39 -- nvmf/common.sh@296 -- # local -ga e810 00:07:12.876 
17:23:39 -- nvmf/common.sh@297 -- # x722=() 00:07:12.876 17:23:39 -- nvmf/common.sh@297 -- # local -ga x722 00:07:12.876 17:23:39 -- nvmf/common.sh@298 -- # mlx=() 00:07:12.876 17:23:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:12.876 17:23:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.876 17:23:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.876 17:23:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.876 17:23:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.876 17:23:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.876 17:23:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.876 17:23:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.876 17:23:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.876 17:23:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.876 17:23:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.876 17:23:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.876 17:23:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:12.876 17:23:39 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:12.876 17:23:39 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:12.876 17:23:39 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:12.876 17:23:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:12.876 17:23:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.876 17:23:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:07:12.876 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:07:12.876 17:23:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:12.876 17:23:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.876 17:23:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:07:12.876 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:07:12.876 17:23:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:12.876 17:23:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:12.876 17:23:39 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.876 17:23:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.876 17:23:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:12.876 17:23:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.876 17:23:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:07:12.876 Found net devices under 0000:09:00.0: mlx_0_0 00:07:12.876 17:23:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.876 17:23:39 -- nvmf/common.sh@382 -- # for 
pci in "${pci_devs[@]}" 00:07:12.876 17:23:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.876 17:23:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:12.876 17:23:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.876 17:23:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:07:12.876 Found net devices under 0000:09:00.1: mlx_0_1 00:07:12.876 17:23:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.876 17:23:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:12.876 17:23:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:12.876 17:23:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:12.876 17:23:39 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:12.876 17:23:39 -- nvmf/common.sh@58 -- # uname 00:07:12.876 17:23:39 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:12.876 17:23:39 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:12.876 17:23:39 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:12.876 17:23:39 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:12.876 17:23:39 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:12.876 17:23:39 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:12.876 17:23:39 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:12.876 17:23:39 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:12.876 17:23:39 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:12.876 17:23:39 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:12.876 17:23:39 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:12.876 17:23:39 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:12.876 17:23:39 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:12.876 17:23:39 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:12.876 17:23:39 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:12.876 17:23:39 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:12.876 17:23:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:12.876 17:23:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.876 17:23:39 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:12.876 17:23:39 -- nvmf/common.sh@105 -- # continue 2 00:07:12.876 17:23:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:12.876 17:23:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.876 17:23:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:12.876 17:23:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:12.876 17:23:39 -- nvmf/common.sh@105 -- # continue 2 00:07:12.876 17:23:39 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:12.876 17:23:39 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:12.876 17:23:39 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:12.876 17:23:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:12.876 17:23:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:12.876 17:23:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:12.876 17:23:39 -- 
nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:12.876 17:23:39 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:12.876 17:23:39 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:12.876 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:12.876 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:07:12.876 altname enp9s0f0np0 00:07:12.876 inet 192.168.100.8/24 scope global mlx_0_0 00:07:12.876 valid_lft forever preferred_lft forever 00:07:12.876 17:23:39 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:12.876 17:23:39 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:12.876 17:23:39 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:12.876 17:23:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:12.876 17:23:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:12.876 17:23:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:13.137 17:23:39 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:13.137 17:23:39 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:13.137 17:23:39 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:13.137 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:13.137 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:07:13.137 altname enp9s0f1np1 00:07:13.137 inet 192.168.100.9/24 scope global mlx_0_1 00:07:13.137 valid_lft forever preferred_lft forever 00:07:13.137 17:23:39 -- nvmf/common.sh@411 -- # return 0 00:07:13.137 17:23:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:13.137 17:23:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:13.137 17:23:39 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:13.137 17:23:39 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:13.137 17:23:39 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:13.137 17:23:39 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:13.137 17:23:39 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:13.137 17:23:39 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:13.137 17:23:39 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:13.137 17:23:39 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:13.137 17:23:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:13.137 17:23:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.137 17:23:39 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:13.137 17:23:39 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:13.137 17:23:39 -- nvmf/common.sh@105 -- # continue 2 00:07:13.137 17:23:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:13.137 17:23:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.137 17:23:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:13.137 17:23:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.137 17:23:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:13.137 17:23:39 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:13.137 17:23:39 -- nvmf/common.sh@105 -- # continue 2 00:07:13.137 17:23:39 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:13.137 17:23:39 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:13.137 17:23:39 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:13.137 17:23:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:13.137 17:23:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:13.137 17:23:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 
00:07:13.137 17:23:39 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:13.137 17:23:39 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:13.137 17:23:39 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:13.137 17:23:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:13.137 17:23:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:13.137 17:23:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:13.137 17:23:39 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:13.137 192.168.100.9' 00:07:13.137 17:23:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:13.137 192.168.100.9' 00:07:13.137 17:23:39 -- nvmf/common.sh@446 -- # head -n 1 00:07:13.137 17:23:39 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:13.137 17:23:39 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:13.137 192.168.100.9' 00:07:13.137 17:23:39 -- nvmf/common.sh@447 -- # tail -n +2 00:07:13.137 17:23:39 -- nvmf/common.sh@447 -- # head -n 1 00:07:13.137 17:23:39 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:13.137 17:23:39 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:13.137 17:23:39 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:13.137 17:23:39 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:13.137 17:23:39 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:13.137 17:23:39 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:13.137 17:23:39 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:13.137 17:23:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:13.137 17:23:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:13.137 17:23:39 -- common/autotest_common.sh@10 -- # set +x 00:07:13.137 17:23:39 -- nvmf/common.sh@470 -- # nvmfpid=1930465 00:07:13.137 17:23:39 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:13.137 17:23:39 -- nvmf/common.sh@471 -- # waitforlisten 1930465 00:07:13.137 17:23:39 -- common/autotest_common.sh@817 -- # '[' -z 1930465 ']' 00:07:13.137 17:23:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.137 17:23:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:13.137 17:23:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.137 17:23:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:13.137 17:23:39 -- common/autotest_common.sh@10 -- # set +x 00:07:13.137 [2024-04-18 17:23:39.521274] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:07:13.138 [2024-04-18 17:23:39.521349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.138 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.138 [2024-04-18 17:23:39.578824] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.396 [2024-04-18 17:23:39.689112] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.396 [2024-04-18 17:23:39.689173] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
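The rpc.sh test starting here begins by snapshotting nvmf_get_stats and checking, with the jcount/jq helpers traced below, that there is one poll group per core of the 0xF mask and that no transport is attached yet. The same JSON is easy to mine with jq directly; a small sketch, with the field names taken from the dump that follows and the rpc.py path assumed rather than taken from the trace:

rpc=./scripts/rpc.py                                       # assumed path; rpc_cmd in the trace drives the same script
$rpc nvmf_get_stats | jq '.poll_groups | length'           # expect 4 for -m 0xF
$rpc nvmf_get_stats | jq '.poll_groups[0].transports[0]'   # null until nvmf_create_transport has run
$rpc nvmf_get_stats | jq '.poll_groups[].transports[].devices[] | {name, polls, total_send_wrs, total_recv_wrs}'   # per-mlx5 counters once the transport exists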
00:07:13.396 [2024-04-18 17:23:39.689187] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.396 [2024-04-18 17:23:39.689198] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.396 [2024-04-18 17:23:39.689207] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:13.396 [2024-04-18 17:23:39.689265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.396 [2024-04-18 17:23:39.689338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.396 [2024-04-18 17:23:39.689409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.396 [2024-04-18 17:23:39.689404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.396 17:23:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:13.396 17:23:39 -- common/autotest_common.sh@850 -- # return 0 00:07:13.396 17:23:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:13.396 17:23:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:13.396 17:23:39 -- common/autotest_common.sh@10 -- # set +x 00:07:13.396 17:23:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.396 17:23:39 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:13.396 17:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.396 17:23:39 -- common/autotest_common.sh@10 -- # set +x 00:07:13.396 17:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.396 17:23:39 -- target/rpc.sh@26 -- # stats='{ 00:07:13.396 "tick_rate": 2700000000, 00:07:13.396 "poll_groups": [ 00:07:13.396 { 00:07:13.396 "name": "nvmf_tgt_poll_group_0", 00:07:13.396 "admin_qpairs": 0, 00:07:13.396 "io_qpairs": 0, 00:07:13.396 "current_admin_qpairs": 0, 00:07:13.397 "current_io_qpairs": 0, 00:07:13.397 "pending_bdev_io": 0, 00:07:13.397 "completed_nvme_io": 0, 00:07:13.397 "transports": [] 00:07:13.397 }, 00:07:13.397 { 00:07:13.397 "name": "nvmf_tgt_poll_group_1", 00:07:13.397 "admin_qpairs": 0, 00:07:13.397 "io_qpairs": 0, 00:07:13.397 "current_admin_qpairs": 0, 00:07:13.397 "current_io_qpairs": 0, 00:07:13.397 "pending_bdev_io": 0, 00:07:13.397 "completed_nvme_io": 0, 00:07:13.397 "transports": [] 00:07:13.397 }, 00:07:13.397 { 00:07:13.397 "name": "nvmf_tgt_poll_group_2", 00:07:13.397 "admin_qpairs": 0, 00:07:13.397 "io_qpairs": 0, 00:07:13.397 "current_admin_qpairs": 0, 00:07:13.397 "current_io_qpairs": 0, 00:07:13.397 "pending_bdev_io": 0, 00:07:13.397 "completed_nvme_io": 0, 00:07:13.397 "transports": [] 00:07:13.397 }, 00:07:13.397 { 00:07:13.397 "name": "nvmf_tgt_poll_group_3", 00:07:13.397 "admin_qpairs": 0, 00:07:13.397 "io_qpairs": 0, 00:07:13.397 "current_admin_qpairs": 0, 00:07:13.397 "current_io_qpairs": 0, 00:07:13.397 "pending_bdev_io": 0, 00:07:13.397 "completed_nvme_io": 0, 00:07:13.397 "transports": [] 00:07:13.397 } 00:07:13.397 ] 00:07:13.397 }' 00:07:13.397 17:23:39 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:13.397 17:23:39 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:13.397 17:23:39 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:13.397 17:23:39 -- target/rpc.sh@15 -- # wc -l 00:07:13.397 17:23:39 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:13.397 17:23:39 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:13.397 17:23:39 -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:13.397 17:23:39 -- target/rpc.sh@31 -- 
# rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:13.397 17:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.397 17:23:39 -- common/autotest_common.sh@10 -- # set +x 00:07:13.655 [2024-04-18 17:23:39.953187] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a00ba0/0x1a05090) succeed. 00:07:13.655 [2024-04-18 17:23:39.963696] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a02190/0x1a46720) succeed. 00:07:13.655 17:23:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.655 17:23:40 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:13.655 17:23:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.655 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:07:13.655 17:23:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.655 17:23:40 -- target/rpc.sh@33 -- # stats='{ 00:07:13.655 "tick_rate": 2700000000, 00:07:13.655 "poll_groups": [ 00:07:13.655 { 00:07:13.655 "name": "nvmf_tgt_poll_group_0", 00:07:13.655 "admin_qpairs": 0, 00:07:13.655 "io_qpairs": 0, 00:07:13.655 "current_admin_qpairs": 0, 00:07:13.655 "current_io_qpairs": 0, 00:07:13.655 "pending_bdev_io": 0, 00:07:13.655 "completed_nvme_io": 0, 00:07:13.655 "transports": [ 00:07:13.655 { 00:07:13.655 "trtype": "RDMA", 00:07:13.655 "pending_data_buffer": 0, 00:07:13.655 "devices": [ 00:07:13.655 { 00:07:13.655 "name": "mlx5_0", 00:07:13.655 "polls": 21545, 00:07:13.655 "idle_polls": 21545, 00:07:13.655 "completions": 0, 00:07:13.655 "requests": 0, 00:07:13.655 "request_latency": 0, 00:07:13.655 "pending_free_request": 0, 00:07:13.655 "pending_rdma_read": 0, 00:07:13.655 "pending_rdma_write": 0, 00:07:13.655 "pending_rdma_send": 0, 00:07:13.655 "total_send_wrs": 0, 00:07:13.655 "send_doorbell_updates": 0, 00:07:13.655 "total_recv_wrs": 4096, 00:07:13.655 "recv_doorbell_updates": 1 00:07:13.655 }, 00:07:13.655 { 00:07:13.655 "name": "mlx5_1", 00:07:13.655 "polls": 21545, 00:07:13.655 "idle_polls": 21545, 00:07:13.655 "completions": 0, 00:07:13.655 "requests": 0, 00:07:13.655 "request_latency": 0, 00:07:13.655 "pending_free_request": 0, 00:07:13.655 "pending_rdma_read": 0, 00:07:13.655 "pending_rdma_write": 0, 00:07:13.655 "pending_rdma_send": 0, 00:07:13.655 "total_send_wrs": 0, 00:07:13.655 "send_doorbell_updates": 0, 00:07:13.655 "total_recv_wrs": 4096, 00:07:13.655 "recv_doorbell_updates": 1 00:07:13.655 } 00:07:13.655 ] 00:07:13.655 } 00:07:13.655 ] 00:07:13.655 }, 00:07:13.655 { 00:07:13.655 "name": "nvmf_tgt_poll_group_1", 00:07:13.656 "admin_qpairs": 0, 00:07:13.656 "io_qpairs": 0, 00:07:13.656 "current_admin_qpairs": 0, 00:07:13.656 "current_io_qpairs": 0, 00:07:13.656 "pending_bdev_io": 0, 00:07:13.656 "completed_nvme_io": 0, 00:07:13.656 "transports": [ 00:07:13.656 { 00:07:13.656 "trtype": "RDMA", 00:07:13.656 "pending_data_buffer": 0, 00:07:13.656 "devices": [ 00:07:13.656 { 00:07:13.656 "name": "mlx5_0", 00:07:13.656 "polls": 14247, 00:07:13.656 "idle_polls": 14247, 00:07:13.656 "completions": 0, 00:07:13.656 "requests": 0, 00:07:13.656 "request_latency": 0, 00:07:13.656 "pending_free_request": 0, 00:07:13.656 "pending_rdma_read": 0, 00:07:13.656 "pending_rdma_write": 0, 00:07:13.656 "pending_rdma_send": 0, 00:07:13.656 "total_send_wrs": 0, 00:07:13.656 "send_doorbell_updates": 0, 00:07:13.656 "total_recv_wrs": 4096, 00:07:13.656 "recv_doorbell_updates": 1 00:07:13.656 }, 00:07:13.656 { 00:07:13.656 "name": "mlx5_1", 00:07:13.656 "polls": 14247, 00:07:13.656 "idle_polls": 14247, 00:07:13.656 
"completions": 0, 00:07:13.656 "requests": 0, 00:07:13.656 "request_latency": 0, 00:07:13.656 "pending_free_request": 0, 00:07:13.656 "pending_rdma_read": 0, 00:07:13.656 "pending_rdma_write": 0, 00:07:13.656 "pending_rdma_send": 0, 00:07:13.656 "total_send_wrs": 0, 00:07:13.656 "send_doorbell_updates": 0, 00:07:13.656 "total_recv_wrs": 4096, 00:07:13.656 "recv_doorbell_updates": 1 00:07:13.656 } 00:07:13.656 ] 00:07:13.656 } 00:07:13.656 ] 00:07:13.656 }, 00:07:13.656 { 00:07:13.656 "name": "nvmf_tgt_poll_group_2", 00:07:13.656 "admin_qpairs": 0, 00:07:13.656 "io_qpairs": 0, 00:07:13.656 "current_admin_qpairs": 0, 00:07:13.656 "current_io_qpairs": 0, 00:07:13.656 "pending_bdev_io": 0, 00:07:13.656 "completed_nvme_io": 0, 00:07:13.656 "transports": [ 00:07:13.656 { 00:07:13.656 "trtype": "RDMA", 00:07:13.656 "pending_data_buffer": 0, 00:07:13.656 "devices": [ 00:07:13.656 { 00:07:13.656 "name": "mlx5_0", 00:07:13.656 "polls": 7498, 00:07:13.656 "idle_polls": 7498, 00:07:13.656 "completions": 0, 00:07:13.656 "requests": 0, 00:07:13.656 "request_latency": 0, 00:07:13.656 "pending_free_request": 0, 00:07:13.656 "pending_rdma_read": 0, 00:07:13.656 "pending_rdma_write": 0, 00:07:13.656 "pending_rdma_send": 0, 00:07:13.656 "total_send_wrs": 0, 00:07:13.656 "send_doorbell_updates": 0, 00:07:13.656 "total_recv_wrs": 4096, 00:07:13.656 "recv_doorbell_updates": 1 00:07:13.656 }, 00:07:13.656 { 00:07:13.656 "name": "mlx5_1", 00:07:13.656 "polls": 7498, 00:07:13.656 "idle_polls": 7498, 00:07:13.656 "completions": 0, 00:07:13.656 "requests": 0, 00:07:13.656 "request_latency": 0, 00:07:13.656 "pending_free_request": 0, 00:07:13.656 "pending_rdma_read": 0, 00:07:13.656 "pending_rdma_write": 0, 00:07:13.656 "pending_rdma_send": 0, 00:07:13.656 "total_send_wrs": 0, 00:07:13.656 "send_doorbell_updates": 0, 00:07:13.656 "total_recv_wrs": 4096, 00:07:13.656 "recv_doorbell_updates": 1 00:07:13.656 } 00:07:13.656 ] 00:07:13.656 } 00:07:13.656 ] 00:07:13.656 }, 00:07:13.656 { 00:07:13.656 "name": "nvmf_tgt_poll_group_3", 00:07:13.656 "admin_qpairs": 0, 00:07:13.656 "io_qpairs": 0, 00:07:13.656 "current_admin_qpairs": 0, 00:07:13.656 "current_io_qpairs": 0, 00:07:13.656 "pending_bdev_io": 0, 00:07:13.656 "completed_nvme_io": 0, 00:07:13.656 "transports": [ 00:07:13.656 { 00:07:13.656 "trtype": "RDMA", 00:07:13.656 "pending_data_buffer": 0, 00:07:13.656 "devices": [ 00:07:13.656 { 00:07:13.656 "name": "mlx5_0", 00:07:13.656 "polls": 986, 00:07:13.656 "idle_polls": 986, 00:07:13.656 "completions": 0, 00:07:13.656 "requests": 0, 00:07:13.656 "request_latency": 0, 00:07:13.656 "pending_free_request": 0, 00:07:13.656 "pending_rdma_read": 0, 00:07:13.656 "pending_rdma_write": 0, 00:07:13.656 "pending_rdma_send": 0, 00:07:13.656 "total_send_wrs": 0, 00:07:13.656 "send_doorbell_updates": 0, 00:07:13.656 "total_recv_wrs": 4096, 00:07:13.656 "recv_doorbell_updates": 1 00:07:13.656 }, 00:07:13.656 { 00:07:13.656 "name": "mlx5_1", 00:07:13.656 "polls": 986, 00:07:13.656 "idle_polls": 986, 00:07:13.656 "completions": 0, 00:07:13.656 "requests": 0, 00:07:13.656 "request_latency": 0, 00:07:13.656 "pending_free_request": 0, 00:07:13.656 "pending_rdma_read": 0, 00:07:13.656 "pending_rdma_write": 0, 00:07:13.656 "pending_rdma_send": 0, 00:07:13.656 "total_send_wrs": 0, 00:07:13.656 "send_doorbell_updates": 0, 00:07:13.656 "total_recv_wrs": 4096, 00:07:13.656 "recv_doorbell_updates": 1 00:07:13.656 } 00:07:13.656 ] 00:07:13.656 } 00:07:13.656 ] 00:07:13.656 } 00:07:13.656 ] 00:07:13.656 }' 00:07:13.656 17:23:40 -- target/rpc.sh@35 
-- # jsum '.poll_groups[].admin_qpairs' 00:07:13.656 17:23:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:13.656 17:23:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:13.656 17:23:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:13.656 17:23:40 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:13.656 17:23:40 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:13.656 17:23:40 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:13.656 17:23:40 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:13.656 17:23:40 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:13.915 17:23:40 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:13.915 17:23:40 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:07:13.915 17:23:40 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:07:13.915 17:23:40 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:07:13.915 17:23:40 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:07:13.915 17:23:40 -- target/rpc.sh@15 -- # wc -l 00:07:13.915 17:23:40 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:07:13.915 17:23:40 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:07:13.915 17:23:40 -- target/rpc.sh@41 -- # transport_type=RDMA 00:07:13.915 17:23:40 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:07:13.915 17:23:40 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:07:13.915 17:23:40 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:07:13.915 17:23:40 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:07:13.915 17:23:40 -- target/rpc.sh@15 -- # wc -l 00:07:13.915 17:23:40 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:07:13.915 17:23:40 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:13.915 17:23:40 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:13.915 17:23:40 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:13.915 17:23:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.915 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:07:13.915 Malloc1 00:07:13.915 17:23:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.915 17:23:40 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:13.915 17:23:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.915 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:07:13.915 17:23:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.915 17:23:40 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:13.915 17:23:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.915 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:07:13.915 17:23:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.915 17:23:40 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:13.915 17:23:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.915 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:07:13.915 17:23:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.915 17:23:40 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:13.915 17:23:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.915 17:23:40 -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.915 [2024-04-18 17:23:40.386856] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:13.915 17:23:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.915 17:23:40 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 192.168.100.8 -s 4420 00:07:13.915 17:23:40 -- common/autotest_common.sh@638 -- # local es=0 00:07:13.915 17:23:40 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 192.168.100.8 -s 4420 00:07:13.915 17:23:40 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:13.915 17:23:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:13.915 17:23:40 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:13.915 17:23:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:13.915 17:23:40 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:13.915 17:23:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:13.915 17:23:40 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:13.915 17:23:40 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:13.915 17:23:40 -- common/autotest_common.sh@641 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 192.168.100.8 -s 4420 00:07:13.916 [2024-04-18 17:23:40.416718] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:13.916 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:13.916 could not add new controller: failed to write to nvme-fabrics device 00:07:13.916 17:23:40 -- common/autotest_common.sh@641 -- # es=1 00:07:13.916 17:23:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:13.916 17:23:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:13.916 17:23:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:13.916 17:23:40 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:13.916 17:23:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.916 17:23:40 -- common/autotest_common.sh@10 -- # set +x 00:07:13.916 17:23:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.916 17:23:40 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:18.108 17:23:43 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:18.108 17:23:43 -- common/autotest_common.sh@1184 -- # local i=0 00:07:18.108 17:23:43 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 
nvme_devices=0 00:07:18.108 17:23:43 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:18.108 17:23:43 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:19.484 17:23:45 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:19.484 17:23:45 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:19.484 17:23:45 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:19.484 17:23:45 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:19.484 17:23:45 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:19.484 17:23:45 -- common/autotest_common.sh@1194 -- # return 0 00:07:19.484 17:23:45 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:22.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.036 17:23:48 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:22.036 17:23:48 -- common/autotest_common.sh@1205 -- # local i=0 00:07:22.036 17:23:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:22.036 17:23:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.036 17:23:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:22.036 17:23:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.036 17:23:48 -- common/autotest_common.sh@1217 -- # return 0 00:07:22.036 17:23:48 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:22.036 17:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:22.036 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:07:22.036 17:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:22.036 17:23:48 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:22.036 17:23:48 -- common/autotest_common.sh@638 -- # local es=0 00:07:22.036 17:23:48 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:22.036 17:23:48 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:22.036 17:23:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:22.036 17:23:48 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:22.036 17:23:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:22.036 17:23:48 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:22.036 17:23:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:22.036 17:23:48 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:22.036 17:23:48 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:22.036 17:23:48 -- common/autotest_common.sh@641 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:22.036 [2024-04-18 17:23:48.059030] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 
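The allow-list behaviour being exercised here reduces to a short rpc.py / nvme-cli sequence. The sketch below is not lifted from target/rpc.sh; it only reuses the subsystem NQN, host NQN and listener address visible in this run, and assumes rpc.py and nvme-cli are on PATH.

    # Sketch of the host allow-list check (NQNs and address are the ones used above).
    SUBSYS=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    rpc.py nvmf_subsystem_allow_any_host -d "$SUBSYS"           # only whitelisted hosts may connect
    nvme connect -t rdma -n "$SUBSYS" -a 192.168.100.8 -s 4420 \
        --hostnqn="$HOSTNQN" && echo "unexpected: connect should have been rejected"

    rpc.py nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN"         # whitelist this host NQN
    nvme connect -t rdma -n "$SUBSYS" -a 192.168.100.8 -s 4420 --hostnqn="$HOSTNQN"

    rpc.py nvmf_subsystem_remove_host "$SUBSYS" "$HOSTNQN"      # back to the deny case
    rpc.py nvmf_subsystem_allow_any_host -e "$SUBSYS"           # or open the subsystem to any host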
00:07:22.036 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:22.036 could not add new controller: failed to write to nvme-fabrics device 00:07:22.036 17:23:48 -- common/autotest_common.sh@641 -- # es=1 00:07:22.036 17:23:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:22.036 17:23:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:22.036 17:23:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:22.036 17:23:48 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:22.036 17:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:22.036 17:23:48 -- common/autotest_common.sh@10 -- # set +x 00:07:22.036 17:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:22.036 17:23:48 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:25.359 17:23:51 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.359 17:23:51 -- common/autotest_common.sh@1184 -- # local i=0 00:07:25.359 17:23:51 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:25.359 17:23:51 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:25.359 17:23:51 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:27.264 17:23:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:27.264 17:23:53 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:27.264 17:23:53 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.264 17:23:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:27.264 17:23:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.264 17:23:53 -- common/autotest_common.sh@1194 -- # return 0 00:07:27.264 17:23:53 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.803 17:23:55 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.803 17:23:55 -- common/autotest_common.sh@1205 -- # local i=0 00:07:29.803 17:23:55 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:29.803 17:23:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.803 17:23:55 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:29.803 17:23:55 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.803 17:23:55 -- common/autotest_common.sh@1217 -- # return 0 00:07:29.803 17:23:55 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.803 17:23:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.803 17:23:55 -- common/autotest_common.sh@10 -- # set +x 00:07:29.803 17:23:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.803 17:23:55 -- target/rpc.sh@81 -- # seq 1 5 00:07:29.803 17:23:55 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:29.803 17:23:55 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:29.803 17:23:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.803 17:23:55 -- common/autotest_common.sh@10 -- # set +x 00:07:29.803 17:23:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.803 17:23:55 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:29.803 17:23:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.803 17:23:55 -- common/autotest_common.sh@10 -- # set +x 00:07:29.803 [2024-04-18 17:23:55.813799] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:29.803 17:23:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.803 17:23:55 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:29.803 17:23:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.803 17:23:55 -- common/autotest_common.sh@10 -- # set +x 00:07:29.803 17:23:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.803 17:23:55 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:29.803 17:23:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.803 17:23:55 -- common/autotest_common.sh@10 -- # set +x 00:07:29.803 17:23:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.803 17:23:55 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:33.105 17:23:59 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:33.105 17:23:59 -- common/autotest_common.sh@1184 -- # local i=0 00:07:33.105 17:23:59 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:33.105 17:23:59 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:33.105 17:23:59 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:35.672 17:24:01 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:35.672 17:24:01 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:35.672 17:24:01 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:35.672 17:24:01 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:35.672 17:24:01 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:35.672 17:24:01 -- common/autotest_common.sh@1194 -- # return 0 00:07:35.672 17:24:01 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:37.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.580 17:24:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:37.580 17:24:03 -- common/autotest_common.sh@1205 -- # local i=0 00:07:37.580 17:24:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:37.580 17:24:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.580 17:24:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:37.580 17:24:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:37.580 17:24:03 -- common/autotest_common.sh@1217 -- # return 0 00:07:37.580 17:24:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:37.580 17:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.580 17:24:03 -- common/autotest_common.sh@10 -- # set +x 00:07:37.580 17:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.580 17:24:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.580 17:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.580 17:24:03 -- common/autotest_common.sh@10 -- # set +x 
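Each pass of the five-iteration loop above follows the same create, attach, connect, tear down pattern. Condensed into plain rpc.py calls it looks roughly like the sketch below, which assumes the Malloc1 bdev and the RDMA transport were already created earlier in the run and that rpc.py and nvme-cli are on PATH.

    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # namespace ID 5
        rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
        # ... wait for the SPDKISFASTANDAWESOME serial to show up in lsblk, then detach ...
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done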
00:07:37.580 17:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.580 17:24:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:37.580 17:24:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:37.580 17:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.580 17:24:03 -- common/autotest_common.sh@10 -- # set +x 00:07:37.580 17:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.580 17:24:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:37.580 17:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.580 17:24:03 -- common/autotest_common.sh@10 -- # set +x 00:07:37.580 [2024-04-18 17:24:03.956321] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:37.580 17:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.580 17:24:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:37.580 17:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.580 17:24:03 -- common/autotest_common.sh@10 -- # set +x 00:07:37.580 17:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.580 17:24:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:37.580 17:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.580 17:24:03 -- common/autotest_common.sh@10 -- # set +x 00:07:37.580 17:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.580 17:24:03 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:41.774 17:24:07 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:41.774 17:24:07 -- common/autotest_common.sh@1184 -- # local i=0 00:07:41.774 17:24:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:41.774 17:24:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:41.774 17:24:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:43.676 17:24:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:43.676 17:24:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:43.676 17:24:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:43.676 17:24:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:43.676 17:24:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:43.676 17:24:09 -- common/autotest_common.sh@1194 -- # return 0 00:07:43.676 17:24:09 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:45.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.581 17:24:12 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:45.581 17:24:12 -- common/autotest_common.sh@1205 -- # local i=0 00:07:45.581 17:24:12 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:45.581 17:24:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.581 17:24:12 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:45.581 17:24:12 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.581 17:24:12 -- common/autotest_common.sh@1217 -- # return 0 
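The waitforserial / waitforserial_disconnect gates that bracket every connect simply poll lsblk until the SPDKISFASTANDAWESOME serial appears (or disappears). A stripped-down version of the wait-for-attach side, reconstructed from the calls visible in the trace rather than copied from autotest_common.sh:

    waitforserial() {
        # Poll up to 16 times, two seconds apart, for a block device whose
        # SERIAL column matches the subsystem serial (e.g. SPDKISFASTANDAWESOME).
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            sleep 2
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        done
        return 1
    }

    waitforserial SPDKISFASTANDAWESOME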
00:07:45.581 17:24:12 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:45.581 17:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.581 17:24:12 -- common/autotest_common.sh@10 -- # set +x 00:07:45.581 17:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.581 17:24:12 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.581 17:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.581 17:24:12 -- common/autotest_common.sh@10 -- # set +x 00:07:45.581 17:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.581 17:24:12 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:45.581 17:24:12 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.581 17:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.581 17:24:12 -- common/autotest_common.sh@10 -- # set +x 00:07:45.581 17:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.581 17:24:12 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:45.581 17:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.581 17:24:12 -- common/autotest_common.sh@10 -- # set +x 00:07:45.581 [2024-04-18 17:24:12.083131] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:45.581 17:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.581 17:24:12 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:45.581 17:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.581 17:24:12 -- common/autotest_common.sh@10 -- # set +x 00:07:45.581 17:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.581 17:24:12 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:45.581 17:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.581 17:24:12 -- common/autotest_common.sh@10 -- # set +x 00:07:45.581 17:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.581 17:24:12 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:49.776 17:24:15 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:49.776 17:24:15 -- common/autotest_common.sh@1184 -- # local i=0 00:07:49.776 17:24:15 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:49.776 17:24:15 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:49.776 17:24:15 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:51.683 17:24:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:51.683 17:24:17 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:51.683 17:24:17 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:51.683 17:24:17 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:51.683 17:24:17 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:51.683 17:24:17 -- common/autotest_common.sh@1194 -- # return 0 00:07:51.683 17:24:17 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:54.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.222 17:24:20 
-- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:54.222 17:24:20 -- common/autotest_common.sh@1205 -- # local i=0 00:07:54.222 17:24:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:54.222 17:24:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.222 17:24:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:54.222 17:24:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.222 17:24:20 -- common/autotest_common.sh@1217 -- # return 0 00:07:54.222 17:24:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.222 17:24:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.222 17:24:20 -- common/autotest_common.sh@10 -- # set +x 00:07:54.222 17:24:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.222 17:24:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.222 17:24:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.222 17:24:20 -- common/autotest_common.sh@10 -- # set +x 00:07:54.222 17:24:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.222 17:24:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:54.222 17:24:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:54.222 17:24:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.222 17:24:20 -- common/autotest_common.sh@10 -- # set +x 00:07:54.222 17:24:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.222 17:24:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:54.222 17:24:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.222 17:24:20 -- common/autotest_common.sh@10 -- # set +x 00:07:54.222 [2024-04-18 17:24:20.244962] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:54.222 17:24:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.222 17:24:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:54.222 17:24:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.222 17:24:20 -- common/autotest_common.sh@10 -- # set +x 00:07:54.222 17:24:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.222 17:24:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:54.222 17:24:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.222 17:24:20 -- common/autotest_common.sh@10 -- # set +x 00:07:54.222 17:24:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.222 17:24:20 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:58.418 17:24:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:58.418 17:24:24 -- common/autotest_common.sh@1184 -- # local i=0 00:07:58.418 17:24:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:58.418 17:24:24 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:58.418 17:24:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:59.796 17:24:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:59.796 17:24:26 -- 
common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:59.796 17:24:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:59.796 17:24:26 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:59.796 17:24:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:59.796 17:24:26 -- common/autotest_common.sh@1194 -- # return 0 00:07:59.796 17:24:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.356 17:24:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.356 17:24:28 -- common/autotest_common.sh@1205 -- # local i=0 00:08:02.356 17:24:28 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:02.356 17:24:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.356 17:24:28 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:02.356 17:24:28 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.356 17:24:28 -- common/autotest_common.sh@1217 -- # return 0 00:08:02.356 17:24:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.356 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.356 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:08:02.356 17:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.356 17:24:28 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.356 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.356 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:08:02.356 17:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.356 17:24:28 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:02.356 17:24:28 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:02.356 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.356 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:08:02.356 17:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.356 17:24:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:02.356 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.356 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:08:02.356 [2024-04-18 17:24:28.404066] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:02.356 17:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.356 17:24:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:02.356 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.356 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:08:02.356 17:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.356 17:24:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:02.356 17:24:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.356 17:24:28 -- common/autotest_common.sh@10 -- # set +x 00:08:02.356 17:24:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.356 17:24:28 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:05.647 17:24:32 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:05.647 17:24:32 -- common/autotest_common.sh@1184 -- # local i=0 00:08:05.647 17:24:32 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:05.647 17:24:32 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:05.647 17:24:32 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:08.200 17:24:34 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:08.200 17:24:34 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:08.200 17:24:34 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:08.200 17:24:34 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:08.200 17:24:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:08.200 17:24:34 -- common/autotest_common.sh@1194 -- # return 0 00:08:08.200 17:24:34 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:10.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.106 17:24:36 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:10.106 17:24:36 -- common/autotest_common.sh@1205 -- # local i=0 00:08:10.106 17:24:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:10.106 17:24:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.106 17:24:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:10.106 17:24:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.106 17:24:36 -- common/autotest_common.sh@1217 -- # return 0 00:08:10.106 17:24:36 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@99 -- # seq 1 5 00:08:10.106 17:24:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:10.106 17:24:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 [2024-04-18 17:24:36.473893] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:10.106 17:24:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 [2024-04-18 17:24:36.525998] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:10.106 17:24:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.106 
17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.106 17:24:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:10.106 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.106 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.106 [2024-04-18 17:24:36.574546] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:10.106 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.107 17:24:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.107 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.107 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.107 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.107 17:24:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.107 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.107 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.107 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.107 17:24:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.107 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.107 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.107 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.107 17:24:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.107 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.107 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.107 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.107 17:24:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:10.107 17:24:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.107 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.107 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.107 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.107 17:24:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:10.107 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.107 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.107 [2024-04-18 17:24:36.622975] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:10.107 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.107 17:24:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.107 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.107 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.107 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.107 17:24:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.107 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.107 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.107 17:24:36 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:08:10.107 17:24:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.107 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.107 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.366 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.366 17:24:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.366 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.366 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.366 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.366 17:24:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:10.366 17:24:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:10.366 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.366 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.366 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.366 17:24:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:10.366 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.367 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.367 [2024-04-18 17:24:36.671446] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:10.367 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.367 17:24:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.367 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.367 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.367 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.367 17:24:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:10.367 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.367 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.367 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.367 17:24:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.367 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.367 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.367 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.367 17:24:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.367 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.367 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.367 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.367 17:24:36 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:10.367 17:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.367 17:24:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.367 17:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.367 17:24:36 -- target/rpc.sh@110 -- # stats='{ 00:08:10.367 "tick_rate": 2700000000, 00:08:10.367 "poll_groups": [ 00:08:10.367 { 00:08:10.367 "name": "nvmf_tgt_poll_group_0", 00:08:10.367 "admin_qpairs": 2, 00:08:10.367 "io_qpairs": 27, 00:08:10.367 "current_admin_qpairs": 0, 00:08:10.367 "current_io_qpairs": 0, 00:08:10.367 "pending_bdev_io": 0, 00:08:10.367 "completed_nvme_io": 
30, 00:08:10.367 "transports": [ 00:08:10.367 { 00:08:10.367 "trtype": "RDMA", 00:08:10.367 "pending_data_buffer": 0, 00:08:10.367 "devices": [ 00:08:10.367 { 00:08:10.367 "name": "mlx5_0", 00:08:10.367 "polls": 7564189, 00:08:10.367 "idle_polls": 7563996, 00:08:10.367 "completions": 193, 00:08:10.367 "requests": 96, 00:08:10.367 "request_latency": 13623657, 00:08:10.367 "pending_free_request": 0, 00:08:10.367 "pending_rdma_read": 0, 00:08:10.367 "pending_rdma_write": 0, 00:08:10.367 "pending_rdma_send": 0, 00:08:10.367 "total_send_wrs": 133, 00:08:10.367 "send_doorbell_updates": 97, 00:08:10.367 "total_recv_wrs": 4192, 00:08:10.367 "recv_doorbell_updates": 97 00:08:10.367 }, 00:08:10.367 { 00:08:10.367 "name": "mlx5_1", 00:08:10.367 "polls": 7564189, 00:08:10.367 "idle_polls": 7564189, 00:08:10.367 "completions": 0, 00:08:10.367 "requests": 0, 00:08:10.367 "request_latency": 0, 00:08:10.367 "pending_free_request": 0, 00:08:10.367 "pending_rdma_read": 0, 00:08:10.367 "pending_rdma_write": 0, 00:08:10.367 "pending_rdma_send": 0, 00:08:10.367 "total_send_wrs": 0, 00:08:10.367 "send_doorbell_updates": 0, 00:08:10.367 "total_recv_wrs": 4096, 00:08:10.367 "recv_doorbell_updates": 1 00:08:10.367 } 00:08:10.367 ] 00:08:10.367 } 00:08:10.367 ] 00:08:10.367 }, 00:08:10.367 { 00:08:10.367 "name": "nvmf_tgt_poll_group_1", 00:08:10.367 "admin_qpairs": 2, 00:08:10.367 "io_qpairs": 26, 00:08:10.367 "current_admin_qpairs": 0, 00:08:10.367 "current_io_qpairs": 0, 00:08:10.367 "pending_bdev_io": 0, 00:08:10.367 "completed_nvme_io": 174, 00:08:10.367 "transports": [ 00:08:10.367 { 00:08:10.367 "trtype": "RDMA", 00:08:10.367 "pending_data_buffer": 0, 00:08:10.367 "devices": [ 00:08:10.367 { 00:08:10.367 "name": "mlx5_0", 00:08:10.367 "polls": 7747189, 00:08:10.367 "idle_polls": 7746770, 00:08:10.367 "completions": 480, 00:08:10.367 "requests": 240, 00:08:10.367 "request_latency": 74648325, 00:08:10.367 "pending_free_request": 0, 00:08:10.367 "pending_rdma_read": 0, 00:08:10.367 "pending_rdma_write": 0, 00:08:10.367 "pending_rdma_send": 0, 00:08:10.367 "total_send_wrs": 422, 00:08:10.367 "send_doorbell_updates": 203, 00:08:10.367 "total_recv_wrs": 4336, 00:08:10.367 "recv_doorbell_updates": 204 00:08:10.367 }, 00:08:10.367 { 00:08:10.367 "name": "mlx5_1", 00:08:10.367 "polls": 7747189, 00:08:10.367 "idle_polls": 7747189, 00:08:10.367 "completions": 0, 00:08:10.367 "requests": 0, 00:08:10.367 "request_latency": 0, 00:08:10.367 "pending_free_request": 0, 00:08:10.367 "pending_rdma_read": 0, 00:08:10.367 "pending_rdma_write": 0, 00:08:10.367 "pending_rdma_send": 0, 00:08:10.367 "total_send_wrs": 0, 00:08:10.367 "send_doorbell_updates": 0, 00:08:10.367 "total_recv_wrs": 4096, 00:08:10.367 "recv_doorbell_updates": 1 00:08:10.367 } 00:08:10.367 ] 00:08:10.367 } 00:08:10.367 ] 00:08:10.367 }, 00:08:10.367 { 00:08:10.367 "name": "nvmf_tgt_poll_group_2", 00:08:10.367 "admin_qpairs": 1, 00:08:10.367 "io_qpairs": 26, 00:08:10.367 "current_admin_qpairs": 0, 00:08:10.367 "current_io_qpairs": 0, 00:08:10.367 "pending_bdev_io": 0, 00:08:10.367 "completed_nvme_io": 126, 00:08:10.367 "transports": [ 00:08:10.367 { 00:08:10.367 "trtype": "RDMA", 00:08:10.367 "pending_data_buffer": 0, 00:08:10.367 "devices": [ 00:08:10.367 { 00:08:10.367 "name": "mlx5_0", 00:08:10.367 "polls": 7658828, 00:08:10.367 "idle_polls": 7658553, 00:08:10.367 "completions": 319, 00:08:10.367 "requests": 159, 00:08:10.367 "request_latency": 49273620, 00:08:10.367 "pending_free_request": 0, 00:08:10.367 "pending_rdma_read": 0, 00:08:10.367 
"pending_rdma_write": 0, 00:08:10.367 "pending_rdma_send": 0, 00:08:10.367 "total_send_wrs": 277, 00:08:10.367 "send_doorbell_updates": 138, 00:08:10.367 "total_recv_wrs": 4255, 00:08:10.367 "recv_doorbell_updates": 138 00:08:10.367 }, 00:08:10.367 { 00:08:10.367 "name": "mlx5_1", 00:08:10.367 "polls": 7658828, 00:08:10.367 "idle_polls": 7658828, 00:08:10.367 "completions": 0, 00:08:10.367 "requests": 0, 00:08:10.367 "request_latency": 0, 00:08:10.367 "pending_free_request": 0, 00:08:10.367 "pending_rdma_read": 0, 00:08:10.367 "pending_rdma_write": 0, 00:08:10.367 "pending_rdma_send": 0, 00:08:10.367 "total_send_wrs": 0, 00:08:10.367 "send_doorbell_updates": 0, 00:08:10.367 "total_recv_wrs": 4096, 00:08:10.367 "recv_doorbell_updates": 1 00:08:10.367 } 00:08:10.367 ] 00:08:10.367 } 00:08:10.367 ] 00:08:10.367 }, 00:08:10.367 { 00:08:10.367 "name": "nvmf_tgt_poll_group_3", 00:08:10.367 "admin_qpairs": 2, 00:08:10.367 "io_qpairs": 26, 00:08:10.367 "current_admin_qpairs": 0, 00:08:10.367 "current_io_qpairs": 0, 00:08:10.367 "pending_bdev_io": 0, 00:08:10.367 "completed_nvme_io": 125, 00:08:10.367 "transports": [ 00:08:10.367 { 00:08:10.367 "trtype": "RDMA", 00:08:10.367 "pending_data_buffer": 0, 00:08:10.367 "devices": [ 00:08:10.367 { 00:08:10.367 "name": "mlx5_0", 00:08:10.367 "polls": 5768598, 00:08:10.367 "idle_polls": 5768268, 00:08:10.367 "completions": 382, 00:08:10.367 "requests": 191, 00:08:10.367 "request_latency": 57611304, 00:08:10.367 "pending_free_request": 0, 00:08:10.367 "pending_rdma_read": 0, 00:08:10.367 "pending_rdma_write": 0, 00:08:10.367 "pending_rdma_send": 0, 00:08:10.367 "total_send_wrs": 324, 00:08:10.367 "send_doorbell_updates": 166, 00:08:10.367 "total_recv_wrs": 4287, 00:08:10.367 "recv_doorbell_updates": 167 00:08:10.367 }, 00:08:10.367 { 00:08:10.367 "name": "mlx5_1", 00:08:10.367 "polls": 5768598, 00:08:10.367 "idle_polls": 5768598, 00:08:10.367 "completions": 0, 00:08:10.367 "requests": 0, 00:08:10.367 "request_latency": 0, 00:08:10.367 "pending_free_request": 0, 00:08:10.367 "pending_rdma_read": 0, 00:08:10.367 "pending_rdma_write": 0, 00:08:10.367 "pending_rdma_send": 0, 00:08:10.367 "total_send_wrs": 0, 00:08:10.367 "send_doorbell_updates": 0, 00:08:10.367 "total_recv_wrs": 4096, 00:08:10.367 "recv_doorbell_updates": 1 00:08:10.367 } 00:08:10.367 ] 00:08:10.367 } 00:08:10.367 ] 00:08:10.367 } 00:08:10.367 ] 00:08:10.367 }' 00:08:10.367 17:24:36 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:10.367 17:24:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:10.367 17:24:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:10.367 17:24:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:10.367 17:24:36 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:10.367 17:24:36 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:10.367 17:24:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:10.367 17:24:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:10.367 17:24:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:10.367 17:24:36 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:08:10.367 17:24:36 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:08:10.367 17:24:36 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:08:10.367 17:24:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:08:10.367 17:24:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 
00:08:10.367 17:24:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:10.368 17:24:36 -- target/rpc.sh@117 -- # (( 1374 > 0 )) 00:08:10.368 17:24:36 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:08:10.368 17:24:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:08:10.368 17:24:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:08:10.368 17:24:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:10.368 17:24:36 -- target/rpc.sh@118 -- # (( 195156906 > 0 )) 00:08:10.368 17:24:36 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:10.368 17:24:36 -- target/rpc.sh@123 -- # nvmftestfini 00:08:10.368 17:24:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:10.368 17:24:36 -- nvmf/common.sh@117 -- # sync 00:08:10.626 17:24:36 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:10.626 17:24:36 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:10.626 17:24:36 -- nvmf/common.sh@120 -- # set +e 00:08:10.626 17:24:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.626 17:24:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:10.626 rmmod nvme_rdma 00:08:10.626 rmmod nvme_fabrics 00:08:10.626 17:24:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.626 17:24:36 -- nvmf/common.sh@124 -- # set -e 00:08:10.626 17:24:36 -- nvmf/common.sh@125 -- # return 0 00:08:10.626 17:24:36 -- nvmf/common.sh@478 -- # '[' -n 1930465 ']' 00:08:10.626 17:24:36 -- nvmf/common.sh@479 -- # killprocess 1930465 00:08:10.626 17:24:36 -- common/autotest_common.sh@936 -- # '[' -z 1930465 ']' 00:08:10.626 17:24:36 -- common/autotest_common.sh@940 -- # kill -0 1930465 00:08:10.626 17:24:36 -- common/autotest_common.sh@941 -- # uname 00:08:10.626 17:24:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:10.627 17:24:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1930465 00:08:10.627 17:24:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:10.627 17:24:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:10.627 17:24:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1930465' 00:08:10.627 killing process with pid 1930465 00:08:10.627 17:24:36 -- common/autotest_common.sh@955 -- # kill 1930465 00:08:10.627 17:24:36 -- common/autotest_common.sh@960 -- # wait 1930465 00:08:10.886 17:24:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:10.886 17:24:37 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:10.886 00:08:10.886 real 1m0.025s 00:08:10.886 user 3m50.376s 00:08:10.886 sys 0m2.947s 00:08:10.886 17:24:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:10.886 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:08:10.886 ************************************ 00:08:10.886 END TEST nvmf_rpc 00:08:10.886 ************************************ 00:08:10.886 17:24:37 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:08:10.886 17:24:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:10.886 17:24:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.886 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:08:11.145 ************************************ 00:08:11.145 START TEST nvmf_invalid 00:08:11.145 ************************************ 00:08:11.145 17:24:37 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:08:11.145 * Looking for test storage... 00:08:11.145 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.145 17:24:37 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.145 17:24:37 -- nvmf/common.sh@7 -- # uname -s 00:08:11.145 17:24:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.145 17:24:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.145 17:24:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.145 17:24:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.145 17:24:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.145 17:24:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.145 17:24:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.145 17:24:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.145 17:24:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.145 17:24:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.145 17:24:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:11.145 17:24:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:11.145 17:24:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.145 17:24:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.145 17:24:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.145 17:24:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.145 17:24:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:11.145 17:24:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.145 17:24:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.145 17:24:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.145 17:24:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.146 17:24:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.146 17:24:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.146 17:24:37 -- paths/export.sh@5 -- # export PATH 00:08:11.146 17:24:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.146 17:24:37 -- nvmf/common.sh@47 -- # : 0 00:08:11.146 17:24:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.146 17:24:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.146 17:24:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.146 17:24:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.146 17:24:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.146 17:24:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.146 17:24:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.146 17:24:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.146 17:24:37 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:11.146 17:24:37 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:11.146 17:24:37 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:11.146 17:24:37 -- target/invalid.sh@14 -- # target=foobar 00:08:11.146 17:24:37 -- target/invalid.sh@16 -- # RANDOM=0 00:08:11.146 17:24:37 -- target/invalid.sh@34 -- # nvmftestinit 00:08:11.146 17:24:37 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:11.146 17:24:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.146 17:24:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:11.146 17:24:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:11.146 17:24:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:11.146 17:24:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.146 17:24:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.146 17:24:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.146 17:24:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:11.146 17:24:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:11.146 17:24:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:11.146 17:24:37 -- common/autotest_common.sh@10 -- # set +x 00:08:13.054 17:24:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:13.054 17:24:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.054 17:24:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:13.054 17:24:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:13.054 17:24:39 -- nvmf/common.sh@292 -- 
# local -a pci_net_devs 00:08:13.054 17:24:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.054 17:24:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.054 17:24:39 -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.054 17:24:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.054 17:24:39 -- nvmf/common.sh@296 -- # e810=() 00:08:13.054 17:24:39 -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.054 17:24:39 -- nvmf/common.sh@297 -- # x722=() 00:08:13.054 17:24:39 -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.054 17:24:39 -- nvmf/common.sh@298 -- # mlx=() 00:08:13.054 17:24:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.054 17:24:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.054 17:24:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.054 17:24:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.054 17:24:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.054 17:24:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.054 17:24:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.054 17:24:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.054 17:24:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.054 17:24:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.054 17:24:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.054 17:24:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.054 17:24:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:13.054 17:24:39 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:13.054 17:24:39 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:13.054 17:24:39 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:13.054 17:24:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.054 17:24:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.054 17:24:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:08:13.054 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:08:13.054 17:24:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:13.054 17:24:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.054 17:24:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:08:13.054 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:08:13.054 17:24:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:13.054 17:24:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.054 17:24:39 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.054 17:24:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.054 17:24:39 -- 
nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:13.054 17:24:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.054 17:24:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:08:13.054 Found net devices under 0000:09:00.0: mlx_0_0 00:08:13.054 17:24:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.054 17:24:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.054 17:24:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.054 17:24:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:13.054 17:24:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.054 17:24:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:08:13.054 Found net devices under 0000:09:00.1: mlx_0_1 00:08:13.054 17:24:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.054 17:24:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:13.054 17:24:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:13.054 17:24:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:13.054 17:24:39 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:13.054 17:24:39 -- nvmf/common.sh@58 -- # uname 00:08:13.054 17:24:39 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:13.054 17:24:39 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:13.054 17:24:39 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:13.054 17:24:39 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:13.054 17:24:39 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:13.054 17:24:39 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:13.054 17:24:39 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:13.054 17:24:39 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:13.054 17:24:39 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:13.054 17:24:39 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:13.054 17:24:39 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:13.054 17:24:39 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:13.054 17:24:39 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:13.054 17:24:39 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:13.054 17:24:39 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:13.054 17:24:39 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:13.054 17:24:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:13.054 17:24:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:13.054 17:24:39 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:13.054 17:24:39 -- nvmf/common.sh@105 -- # continue 2 00:08:13.054 17:24:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:13.054 17:24:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:13.054 17:24:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:13.054 17:24:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:13.055 17:24:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:13.055 17:24:39 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:13.055 17:24:39 -- nvmf/common.sh@105 -- # continue 2 00:08:13.055 17:24:39 -- 
nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:13.055 17:24:39 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:13.055 17:24:39 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:13.055 17:24:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:13.055 17:24:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:13.055 17:24:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:13.055 17:24:39 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:13.055 17:24:39 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:13.055 17:24:39 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:13.055 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:13.055 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:08:13.055 altname enp9s0f0np0 00:08:13.055 inet 192.168.100.8/24 scope global mlx_0_0 00:08:13.055 valid_lft forever preferred_lft forever 00:08:13.055 17:24:39 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:13.055 17:24:39 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:13.055 17:24:39 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:13.055 17:24:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:13.055 17:24:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:13.055 17:24:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:13.055 17:24:39 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:13.055 17:24:39 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:13.055 17:24:39 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:13.055 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:13.055 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:08:13.055 altname enp9s0f1np1 00:08:13.055 inet 192.168.100.9/24 scope global mlx_0_1 00:08:13.055 valid_lft forever preferred_lft forever 00:08:13.055 17:24:39 -- nvmf/common.sh@411 -- # return 0 00:08:13.055 17:24:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:13.055 17:24:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:13.055 17:24:39 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:13.055 17:24:39 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:13.055 17:24:39 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:13.055 17:24:39 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:13.055 17:24:39 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:13.055 17:24:39 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:13.055 17:24:39 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:13.055 17:24:39 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:13.055 17:24:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:13.055 17:24:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:13.055 17:24:39 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:13.055 17:24:39 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:13.055 17:24:39 -- nvmf/common.sh@105 -- # continue 2 00:08:13.055 17:24:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:13.055 17:24:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:13.055 17:24:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:13.055 17:24:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:13.055 17:24:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:13.055 17:24:39 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:13.055 17:24:39 -- nvmf/common.sh@105 -- # continue 2 
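The same get_ip_address helper is invoked again just below to build RDMA_IP_LIST; as the trace shows, it reduces to a single pipeline per interface. A minimal sketch (the mlx_0_* names and 192.168.100.x addresses are simply what this rig happens to use):

  # hedged sketch: first IPv4 address of an interface, as traced above
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  # get_ip_address mlx_0_0   -> 192.168.100.8 on this node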
00:08:13.055 17:24:39 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:13.055 17:24:39 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:13.055 17:24:39 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:13.055 17:24:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:13.055 17:24:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:13.055 17:24:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:13.055 17:24:39 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:13.055 17:24:39 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:13.055 17:24:39 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:13.055 17:24:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:13.055 17:24:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:13.055 17:24:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:13.055 17:24:39 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:13.055 192.168.100.9' 00:08:13.055 17:24:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:13.055 192.168.100.9' 00:08:13.055 17:24:39 -- nvmf/common.sh@446 -- # head -n 1 00:08:13.055 17:24:39 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:13.055 17:24:39 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:13.055 192.168.100.9' 00:08:13.055 17:24:39 -- nvmf/common.sh@447 -- # tail -n +2 00:08:13.055 17:24:39 -- nvmf/common.sh@447 -- # head -n 1 00:08:13.055 17:24:39 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:13.055 17:24:39 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:13.055 17:24:39 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:13.055 17:24:39 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:13.055 17:24:39 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:13.055 17:24:39 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:13.315 17:24:39 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:13.315 17:24:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:13.315 17:24:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:13.315 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:13.315 17:24:39 -- nvmf/common.sh@470 -- # nvmfpid=1939805 00:08:13.315 17:24:39 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.315 17:24:39 -- nvmf/common.sh@471 -- # waitforlisten 1939805 00:08:13.315 17:24:39 -- common/autotest_common.sh@817 -- # '[' -z 1939805 ']' 00:08:13.315 17:24:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.315 17:24:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:13.315 17:24:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.315 17:24:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:13.315 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:13.315 [2024-04-18 17:24:39.652480] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
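The DPDK EAL banner continues on the next trace line; behind it, nvmfappstart has just forked the target and is waiting for its RPC socket. In outline (the -i/-e/-m flags are the ones visible in this log, the binary path is shortened, and the readiness probe below is an illustrative stand-in for waitforlisten rather than its exact implementation):

  # hedged sketch of the nvmfappstart sequence traced above
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until the app answers on its default RPC socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done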
00:08:13.315 [2024-04-18 17:24:39.652571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.315 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.315 [2024-04-18 17:24:39.716065] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.315 [2024-04-18 17:24:39.833178] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.315 [2024-04-18 17:24:39.833228] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.315 [2024-04-18 17:24:39.833252] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.315 [2024-04-18 17:24:39.833264] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.315 [2024-04-18 17:24:39.833275] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.315 [2024-04-18 17:24:39.833338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.315 [2024-04-18 17:24:39.833375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.315 [2024-04-18 17:24:39.833426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.315 [2024-04-18 17:24:39.833430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.573 17:24:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:13.573 17:24:39 -- common/autotest_common.sh@850 -- # return 0 00:08:13.573 17:24:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:13.573 17:24:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:13.573 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:08:13.573 17:24:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.573 17:24:39 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:13.573 17:24:39 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6090 00:08:13.831 [2024-04-18 17:24:40.221662] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:13.831 17:24:40 -- target/invalid.sh@40 -- # out='request: 00:08:13.831 { 00:08:13.831 "nqn": "nqn.2016-06.io.spdk:cnode6090", 00:08:13.831 "tgt_name": "foobar", 00:08:13.831 "method": "nvmf_create_subsystem", 00:08:13.831 "req_id": 1 00:08:13.831 } 00:08:13.831 Got JSON-RPC error response 00:08:13.831 response: 00:08:13.831 { 00:08:13.831 "code": -32603, 00:08:13.831 "message": "Unable to find target foobar" 00:08:13.831 }' 00:08:13.831 17:24:40 -- target/invalid.sh@41 -- # [[ request: 00:08:13.831 { 00:08:13.831 "nqn": "nqn.2016-06.io.spdk:cnode6090", 00:08:13.831 "tgt_name": "foobar", 00:08:13.831 "method": "nvmf_create_subsystem", 00:08:13.831 "req_id": 1 00:08:13.831 } 00:08:13.831 Got JSON-RPC error response 00:08:13.831 response: 00:08:13.831 { 00:08:13.831 "code": -32603, 00:08:13.831 "message": "Unable to find target foobar" 00:08:13.831 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:13.831 17:24:40 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:13.831 17:24:40 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17347 00:08:14.090 [2024-04-18 17:24:40.466522] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17347: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:14.090 17:24:40 -- target/invalid.sh@45 -- # out='request: 00:08:14.090 { 00:08:14.090 "nqn": "nqn.2016-06.io.spdk:cnode17347", 00:08:14.090 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:14.090 "method": "nvmf_create_subsystem", 00:08:14.090 "req_id": 1 00:08:14.090 } 00:08:14.090 Got JSON-RPC error response 00:08:14.090 response: 00:08:14.090 { 00:08:14.090 "code": -32602, 00:08:14.090 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:14.090 }' 00:08:14.090 17:24:40 -- target/invalid.sh@46 -- # [[ request: 00:08:14.090 { 00:08:14.090 "nqn": "nqn.2016-06.io.spdk:cnode17347", 00:08:14.090 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:14.090 "method": "nvmf_create_subsystem", 00:08:14.090 "req_id": 1 00:08:14.090 } 00:08:14.090 Got JSON-RPC error response 00:08:14.090 response: 00:08:14.090 { 00:08:14.090 "code": -32602, 00:08:14.090 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:14.090 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:14.090 17:24:40 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:14.090 17:24:40 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16228 00:08:14.348 [2024-04-18 17:24:40.707278] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16228: invalid model number 'SPDK_Controller' 00:08:14.348 17:24:40 -- target/invalid.sh@50 -- # out='request: 00:08:14.348 { 00:08:14.348 "nqn": "nqn.2016-06.io.spdk:cnode16228", 00:08:14.348 "model_number": "SPDK_Controller\u001f", 00:08:14.348 "method": "nvmf_create_subsystem", 00:08:14.348 "req_id": 1 00:08:14.348 } 00:08:14.348 Got JSON-RPC error response 00:08:14.348 response: 00:08:14.348 { 00:08:14.348 "code": -32602, 00:08:14.349 "message": "Invalid MN SPDK_Controller\u001f" 00:08:14.349 }' 00:08:14.349 17:24:40 -- target/invalid.sh@51 -- # [[ request: 00:08:14.349 { 00:08:14.349 "nqn": "nqn.2016-06.io.spdk:cnode16228", 00:08:14.349 "model_number": "SPDK_Controller\u001f", 00:08:14.349 "method": "nvmf_create_subsystem", 00:08:14.349 "req_id": 1 00:08:14.349 } 00:08:14.349 Got JSON-RPC error response 00:08:14.349 response: 00:08:14.349 { 00:08:14.349 "code": -32602, 00:08:14.349 "message": "Invalid MN SPDK_Controller\u001f" 00:08:14.349 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:14.349 17:24:40 -- target/invalid.sh@54 -- # gen_random_s 21 00:08:14.349 17:24:40 -- target/invalid.sh@19 -- # local length=21 ll 00:08:14.349 17:24:40 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:14.349 17:24:40 -- target/invalid.sh@21 -- # local chars 00:08:14.349 17:24:40 -- target/invalid.sh@22 -- # local string 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # 
(( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 79 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=O 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 122 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=z 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 77 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=M 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 89 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=Y 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 85 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x55' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=U 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 102 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=f 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 112 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x70' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=p 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 61 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+== 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 42 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+='*' 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 120 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=x 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 46 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=. 
00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 61 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+== 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 34 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+='"' 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 96 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+='`' 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 112 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x70' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=p 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 68 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=D 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 66 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=B 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 94 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+='^' 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 87 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=W 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 46 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=. 
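The loop keeps appending one character per iteration until it reaches the requested length; one more character follows in the trace before the 21-character serial number is echoed and fed to nvmf_create_subsystem. Stripped of the xtrace noise, gen_random_s amounts to the sketch below; selecting the code via $RANDOM is inferred from the RANDOM=0 seeding earlier in invalid.sh and is an assumption, not a copy of the script:

  # hedged sketch of the gen_random_s pattern traced above
  gen_random_s() {
      local length=$1 ll string=
      local chars=($(seq 32 127))                      # decimal codes of printable ASCII (plus DEL)
      for (( ll = 0; ll < length; ll++ )); do
          local code=${chars[RANDOM % ${#chars[@]}]}
          string+=$(echo -e "\x$(printf %x "$code")")  # decimal -> hex -> character, as in the trace
      done
      echo "$string"
  }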
00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # printf %x 102 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:14.349 17:24:40 -- target/invalid.sh@25 -- # string+=f 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.349 17:24:40 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.349 17:24:40 -- target/invalid.sh@28 -- # [[ O == \- ]] 00:08:14.349 17:24:40 -- target/invalid.sh@31 -- # echo 'OzMYUfp=*x.="`pDB^W.f' 00:08:14.349 17:24:40 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'OzMYUfp=*x.="`pDB^W.f' nqn.2016-06.io.spdk:cnode20206 00:08:14.608 [2024-04-18 17:24:41.040411] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20206: invalid serial number 'OzMYUfp=*x.="`pDB^W.f' 00:08:14.608 17:24:41 -- target/invalid.sh@54 -- # out='request: 00:08:14.608 { 00:08:14.608 "nqn": "nqn.2016-06.io.spdk:cnode20206", 00:08:14.608 "serial_number": "OzMYUfp=*x.=\"`pDB^W.f", 00:08:14.608 "method": "nvmf_create_subsystem", 00:08:14.608 "req_id": 1 00:08:14.608 } 00:08:14.608 Got JSON-RPC error response 00:08:14.608 response: 00:08:14.608 { 00:08:14.608 "code": -32602, 00:08:14.608 "message": "Invalid SN OzMYUfp=*x.=\"`pDB^W.f" 00:08:14.608 }' 00:08:14.608 17:24:41 -- target/invalid.sh@55 -- # [[ request: 00:08:14.608 { 00:08:14.608 "nqn": "nqn.2016-06.io.spdk:cnode20206", 00:08:14.608 "serial_number": "OzMYUfp=*x.=\"`pDB^W.f", 00:08:14.608 "method": "nvmf_create_subsystem", 00:08:14.608 "req_id": 1 00:08:14.608 } 00:08:14.608 Got JSON-RPC error response 00:08:14.608 response: 00:08:14.608 { 00:08:14.608 "code": -32602, 00:08:14.608 "message": "Invalid SN OzMYUfp=*x.=\"`pDB^W.f" 00:08:14.608 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:14.608 17:24:41 -- target/invalid.sh@58 -- # gen_random_s 41 00:08:14.608 17:24:41 -- target/invalid.sh@19 -- # local length=41 ll 00:08:14.608 17:24:41 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:14.608 17:24:41 -- target/invalid.sh@21 -- # local chars 00:08:14.608 17:24:41 -- target/invalid.sh@22 -- # local string 00:08:14.608 17:24:41 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:14.608 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 77 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=M 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 103 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=g 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 
-- target/invalid.sh@25 -- # printf %x 62 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+='>' 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 108 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=l 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 89 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=Y 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 48 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=0 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 86 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x56' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=V 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 41 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x29' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=')' 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 97 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x61' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=a 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 102 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=f 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 71 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=G 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 95 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=_ 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 123 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+='{' 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- 
target/invalid.sh@25 -- # printf %x 110 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=n 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 40 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+='(' 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 92 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+='\' 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 53 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=5 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 78 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=N 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 98 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=b 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 47 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=/ 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 125 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+='}' 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 46 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=. 
00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # printf %x 37 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:14.609 17:24:41 -- target/invalid.sh@25 -- # string+=% 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.609 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # printf %x 83 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x53' 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # string+=S 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # printf %x 94 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # string+='^' 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # printf %x 67 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # string+=C 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # printf %x 70 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # string+=F 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # printf %x 105 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x69' 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # string+=i 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # printf %x 99 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # string+=c 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # printf %x 61 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # string+== 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # printf %x 67 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # string+=C 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # printf %x 102 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # string+=f 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.868 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # printf %x 61 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:14.868 17:24:41 -- target/invalid.sh@25 -- # string+== 
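The last few characters are appended below, after which the finished 41-character model number is handed to nvmf_create_subsystem -d and its error text is pattern-matched. Every negative case in invalid.sh follows that same capture-and-match shape; a hedged sketch (the rpc.py path, cnode23926 and the "Invalid MN" substring are taken from this log, the wrapper itself is illustrative):

  # hedged sketch of the capture-and-match pattern used by the invalid-input checks
  out=$(./scripts/rpc.py nvmf_create_subsystem -d "$bad_model_number" \
        nqn.2016-06.io.spdk:cnode23926 2>&1 || true)
  [[ "$out" == *"Invalid MN"* ]] || exit 1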
00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # printf %x 127 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # string+=$'\177' 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # printf %x 45 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # string+=- 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # printf %x 73 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x49' 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # string+=I 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # printf %x 62 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # string+='>' 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # printf %x 103 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # string+=g 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # printf %x 114 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # string+=r 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # printf %x 86 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x56' 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # string+=V 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # printf %x 89 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:14.869 17:24:41 -- target/invalid.sh@25 -- # string+=Y 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:14.869 17:24:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:14.869 17:24:41 -- target/invalid.sh@28 -- # [[ M == \- ]] 00:08:14.869 17:24:41 -- target/invalid.sh@31 -- # echo 'Mg>lY0V)afG_{n(\5Nb/}.%S^CFic=Cf=-I>grVY' 00:08:14.869 17:24:41 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Mg>lY0V)afG_{n(\5Nb/}.%S^CFic=Cf=-I>grVY' nqn.2016-06.io.spdk:cnode23926 00:08:15.127 [2024-04-18 17:24:41.433768] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23926: invalid model number 'Mg>lY0V)afG_{n(\5Nb/}.%S^CFic=Cf=-I>grVY' 00:08:15.127 17:24:41 -- target/invalid.sh@58 -- # out='request: 00:08:15.127 { 00:08:15.127 "nqn": "nqn.2016-06.io.spdk:cnode23926", 00:08:15.127 "model_number": "Mg>lY0V)afG_{n(\\5Nb/}.%S^CFic=Cf=\u007f-I>grVY", 00:08:15.127 "method": "nvmf_create_subsystem", 
00:08:15.127 "req_id": 1 00:08:15.127 } 00:08:15.127 Got JSON-RPC error response 00:08:15.127 response: 00:08:15.127 { 00:08:15.127 "code": -32602, 00:08:15.127 "message": "Invalid MN Mg>lY0V)afG_{n(\\5Nb/}.%S^CFic=Cf=\u007f-I>grVY" 00:08:15.127 }' 00:08:15.127 17:24:41 -- target/invalid.sh@59 -- # [[ request: 00:08:15.127 { 00:08:15.127 "nqn": "nqn.2016-06.io.spdk:cnode23926", 00:08:15.127 "model_number": "Mg>lY0V)afG_{n(\\5Nb/}.%S^CFic=Cf=\u007f-I>grVY", 00:08:15.127 "method": "nvmf_create_subsystem", 00:08:15.127 "req_id": 1 00:08:15.127 } 00:08:15.127 Got JSON-RPC error response 00:08:15.127 response: 00:08:15.127 { 00:08:15.127 "code": -32602, 00:08:15.127 "message": "Invalid MN Mg>lY0V)afG_{n(\\5Nb/}.%S^CFic=Cf=\u007f-I>grVY" 00:08:15.127 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:15.127 17:24:41 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:08:15.385 [2024-04-18 17:24:41.733766] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b2e200/0x1b326f0) succeed. 00:08:15.385 [2024-04-18 17:24:41.744406] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b2f7f0/0x1b73d80) succeed. 00:08:15.385 17:24:41 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:15.953 17:24:42 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:08:15.953 17:24:42 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:08:15.953 192.168.100.9' 00:08:15.953 17:24:42 -- target/invalid.sh@67 -- # head -n 1 00:08:15.953 17:24:42 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:08:15.953 17:24:42 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:08:15.953 [2024-04-18 17:24:42.434072] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:15.953 17:24:42 -- target/invalid.sh@69 -- # out='request: 00:08:15.953 { 00:08:15.953 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:15.953 "listen_address": { 00:08:15.953 "trtype": "rdma", 00:08:15.953 "traddr": "192.168.100.8", 00:08:15.953 "trsvcid": "4421" 00:08:15.953 }, 00:08:15.953 "method": "nvmf_subsystem_remove_listener", 00:08:15.953 "req_id": 1 00:08:15.953 } 00:08:15.953 Got JSON-RPC error response 00:08:15.953 response: 00:08:15.953 { 00:08:15.953 "code": -32602, 00:08:15.953 "message": "Invalid parameters" 00:08:15.953 }' 00:08:15.953 17:24:42 -- target/invalid.sh@70 -- # [[ request: 00:08:15.953 { 00:08:15.953 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:15.953 "listen_address": { 00:08:15.953 "trtype": "rdma", 00:08:15.953 "traddr": "192.168.100.8", 00:08:15.953 "trsvcid": "4421" 00:08:15.953 }, 00:08:15.953 "method": "nvmf_subsystem_remove_listener", 00:08:15.953 "req_id": 1 00:08:15.953 } 00:08:15.953 Got JSON-RPC error response 00:08:15.953 response: 00:08:15.953 { 00:08:15.953 "code": -32602, 00:08:15.953 "message": "Invalid parameters" 00:08:15.953 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:15.953 17:24:42 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17225 -i 0 00:08:16.211 [2024-04-18 17:24:42.670841] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17225: invalid cntlid range [0-65519] 00:08:16.211 17:24:42 -- target/invalid.sh@73 -- # out='request: 
00:08:16.211 { 00:08:16.211 "nqn": "nqn.2016-06.io.spdk:cnode17225", 00:08:16.211 "min_cntlid": 0, 00:08:16.211 "method": "nvmf_create_subsystem", 00:08:16.211 "req_id": 1 00:08:16.211 } 00:08:16.211 Got JSON-RPC error response 00:08:16.211 response: 00:08:16.211 { 00:08:16.211 "code": -32602, 00:08:16.211 "message": "Invalid cntlid range [0-65519]" 00:08:16.211 }' 00:08:16.211 17:24:42 -- target/invalid.sh@74 -- # [[ request: 00:08:16.211 { 00:08:16.211 "nqn": "nqn.2016-06.io.spdk:cnode17225", 00:08:16.211 "min_cntlid": 0, 00:08:16.211 "method": "nvmf_create_subsystem", 00:08:16.211 "req_id": 1 00:08:16.211 } 00:08:16.211 Got JSON-RPC error response 00:08:16.211 response: 00:08:16.211 { 00:08:16.211 "code": -32602, 00:08:16.212 "message": "Invalid cntlid range [0-65519]" 00:08:16.212 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:16.212 17:24:42 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30827 -i 65520 00:08:16.469 [2024-04-18 17:24:42.907696] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30827: invalid cntlid range [65520-65519] 00:08:16.469 17:24:42 -- target/invalid.sh@75 -- # out='request: 00:08:16.469 { 00:08:16.469 "nqn": "nqn.2016-06.io.spdk:cnode30827", 00:08:16.469 "min_cntlid": 65520, 00:08:16.469 "method": "nvmf_create_subsystem", 00:08:16.469 "req_id": 1 00:08:16.469 } 00:08:16.469 Got JSON-RPC error response 00:08:16.469 response: 00:08:16.469 { 00:08:16.469 "code": -32602, 00:08:16.469 "message": "Invalid cntlid range [65520-65519]" 00:08:16.469 }' 00:08:16.469 17:24:42 -- target/invalid.sh@76 -- # [[ request: 00:08:16.469 { 00:08:16.469 "nqn": "nqn.2016-06.io.spdk:cnode30827", 00:08:16.469 "min_cntlid": 65520, 00:08:16.469 "method": "nvmf_create_subsystem", 00:08:16.469 "req_id": 1 00:08:16.469 } 00:08:16.469 Got JSON-RPC error response 00:08:16.469 response: 00:08:16.469 { 00:08:16.469 "code": -32602, 00:08:16.469 "message": "Invalid cntlid range [65520-65519]" 00:08:16.469 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:16.469 17:24:42 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4628 -I 0 00:08:16.728 [2024-04-18 17:24:43.144576] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4628: invalid cntlid range [1-0] 00:08:16.728 17:24:43 -- target/invalid.sh@77 -- # out='request: 00:08:16.728 { 00:08:16.728 "nqn": "nqn.2016-06.io.spdk:cnode4628", 00:08:16.728 "max_cntlid": 0, 00:08:16.728 "method": "nvmf_create_subsystem", 00:08:16.728 "req_id": 1 00:08:16.728 } 00:08:16.728 Got JSON-RPC error response 00:08:16.728 response: 00:08:16.728 { 00:08:16.728 "code": -32602, 00:08:16.728 "message": "Invalid cntlid range [1-0]" 00:08:16.728 }' 00:08:16.728 17:24:43 -- target/invalid.sh@78 -- # [[ request: 00:08:16.728 { 00:08:16.728 "nqn": "nqn.2016-06.io.spdk:cnode4628", 00:08:16.728 "max_cntlid": 0, 00:08:16.728 "method": "nvmf_create_subsystem", 00:08:16.728 "req_id": 1 00:08:16.728 } 00:08:16.728 Got JSON-RPC error response 00:08:16.728 response: 00:08:16.728 { 00:08:16.728 "code": -32602, 00:08:16.728 "message": "Invalid cntlid range [1-0]" 00:08:16.728 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:16.728 17:24:43 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31915 -I 65520 00:08:16.986 [2024-04-18 
17:24:43.397466] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31915: invalid cntlid range [1-65520] 00:08:16.986 17:24:43 -- target/invalid.sh@79 -- # out='request: 00:08:16.986 { 00:08:16.986 "nqn": "nqn.2016-06.io.spdk:cnode31915", 00:08:16.986 "max_cntlid": 65520, 00:08:16.986 "method": "nvmf_create_subsystem", 00:08:16.986 "req_id": 1 00:08:16.986 } 00:08:16.986 Got JSON-RPC error response 00:08:16.986 response: 00:08:16.986 { 00:08:16.986 "code": -32602, 00:08:16.986 "message": "Invalid cntlid range [1-65520]" 00:08:16.986 }' 00:08:16.986 17:24:43 -- target/invalid.sh@80 -- # [[ request: 00:08:16.986 { 00:08:16.986 "nqn": "nqn.2016-06.io.spdk:cnode31915", 00:08:16.986 "max_cntlid": 65520, 00:08:16.986 "method": "nvmf_create_subsystem", 00:08:16.986 "req_id": 1 00:08:16.986 } 00:08:16.986 Got JSON-RPC error response 00:08:16.986 response: 00:08:16.986 { 00:08:16.986 "code": -32602, 00:08:16.986 "message": "Invalid cntlid range [1-65520]" 00:08:16.986 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:16.986 17:24:43 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31218 -i 6 -I 5 00:08:17.245 [2024-04-18 17:24:43.650376] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31218: invalid cntlid range [6-5] 00:08:17.245 17:24:43 -- target/invalid.sh@83 -- # out='request: 00:08:17.245 { 00:08:17.245 "nqn": "nqn.2016-06.io.spdk:cnode31218", 00:08:17.245 "min_cntlid": 6, 00:08:17.245 "max_cntlid": 5, 00:08:17.245 "method": "nvmf_create_subsystem", 00:08:17.245 "req_id": 1 00:08:17.245 } 00:08:17.245 Got JSON-RPC error response 00:08:17.245 response: 00:08:17.245 { 00:08:17.245 "code": -32602, 00:08:17.245 "message": "Invalid cntlid range [6-5]" 00:08:17.245 }' 00:08:17.245 17:24:43 -- target/invalid.sh@84 -- # [[ request: 00:08:17.245 { 00:08:17.245 "nqn": "nqn.2016-06.io.spdk:cnode31218", 00:08:17.245 "min_cntlid": 6, 00:08:17.245 "max_cntlid": 5, 00:08:17.245 "method": "nvmf_create_subsystem", 00:08:17.245 "req_id": 1 00:08:17.245 } 00:08:17.245 Got JSON-RPC error response 00:08:17.245 response: 00:08:17.245 { 00:08:17.245 "code": -32602, 00:08:17.245 "message": "Invalid cntlid range [6-5]" 00:08:17.245 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:17.245 17:24:43 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:17.245 17:24:43 -- target/invalid.sh@87 -- # out='request: 00:08:17.245 { 00:08:17.245 "name": "foobar", 00:08:17.245 "method": "nvmf_delete_target", 00:08:17.245 "req_id": 1 00:08:17.245 } 00:08:17.245 Got JSON-RPC error response 00:08:17.245 response: 00:08:17.245 { 00:08:17.245 "code": -32602, 00:08:17.245 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:17.245 }' 00:08:17.245 17:24:43 -- target/invalid.sh@88 -- # [[ request: 00:08:17.245 { 00:08:17.245 "name": "foobar", 00:08:17.245 "method": "nvmf_delete_target", 00:08:17.245 "req_id": 1 00:08:17.245 } 00:08:17.245 Got JSON-RPC error response 00:08:17.245 response: 00:08:17.245 { 00:08:17.245 "code": -32602, 00:08:17.245 "message": "The specified target doesn't exist, cannot delete it." 
00:08:17.245 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:17.245 17:24:43 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:17.245 17:24:43 -- target/invalid.sh@91 -- # nvmftestfini 00:08:17.245 17:24:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:17.245 17:24:43 -- nvmf/common.sh@117 -- # sync 00:08:17.245 17:24:43 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:17.245 17:24:43 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:17.245 17:24:43 -- nvmf/common.sh@120 -- # set +e 00:08:17.245 17:24:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.245 17:24:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:17.245 rmmod nvme_rdma 00:08:17.505 rmmod nvme_fabrics 00:08:17.506 17:24:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.506 17:24:43 -- nvmf/common.sh@124 -- # set -e 00:08:17.506 17:24:43 -- nvmf/common.sh@125 -- # return 0 00:08:17.506 17:24:43 -- nvmf/common.sh@478 -- # '[' -n 1939805 ']' 00:08:17.506 17:24:43 -- nvmf/common.sh@479 -- # killprocess 1939805 00:08:17.506 17:24:43 -- common/autotest_common.sh@936 -- # '[' -z 1939805 ']' 00:08:17.506 17:24:43 -- common/autotest_common.sh@940 -- # kill -0 1939805 00:08:17.506 17:24:43 -- common/autotest_common.sh@941 -- # uname 00:08:17.506 17:24:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:17.506 17:24:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1939805 00:08:17.506 17:24:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:17.506 17:24:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:17.506 17:24:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1939805' 00:08:17.506 killing process with pid 1939805 00:08:17.506 17:24:43 -- common/autotest_common.sh@955 -- # kill 1939805 00:08:17.506 17:24:43 -- common/autotest_common.sh@960 -- # wait 1939805 00:08:17.764 17:24:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:17.764 17:24:44 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:17.764 00:08:17.764 real 0m6.731s 00:08:17.764 user 0m20.793s 00:08:17.764 sys 0m2.395s 00:08:17.764 17:24:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:17.764 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:08:17.764 ************************************ 00:08:17.765 END TEST nvmf_invalid 00:08:17.765 ************************************ 00:08:17.765 17:24:44 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:17.765 17:24:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:17.765 17:24:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.765 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:08:18.023 ************************************ 00:08:18.023 START TEST nvmf_abort 00:08:18.023 ************************************ 00:08:18.023 17:24:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:08:18.023 * Looking for test storage... 
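For context on the nvmf_invalid checks above: each -32602 response comes from cntlid-range validation in nvmf_create_subsystem, which, judging from the error strings in this run, accepts controller IDs in [1, 65519] and requires min_cntlid <= max_cntlid. A minimal sketch of reproducing the rejected cases by hand, reusing the rpc.py path, NQNs and -i/-I flags exactly as traced above (the expected errors in the comments are copied from the responses shown, not re-verified here):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17225 -i 0       # min_cntlid below 1      -> "Invalid cntlid range [0-65519]"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30827 -i 65520   # min_cntlid above 65519  -> "Invalid cntlid range [65520-65519]"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4628 -I 0        # max_cntlid below 1      -> "Invalid cntlid range [1-0]"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31915 -I 65520   # max_cntlid above 65519  -> "Invalid cntlid range [1-65520]"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31218 -i 6 -I 5  # min_cntlid > max_cntlid -> "Invalid cntlid range [6-5]"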
00:08:18.023 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:18.023 17:24:44 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.023 17:24:44 -- nvmf/common.sh@7 -- # uname -s 00:08:18.023 17:24:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.023 17:24:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.023 17:24:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.023 17:24:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.023 17:24:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.023 17:24:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.023 17:24:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.023 17:24:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.023 17:24:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.023 17:24:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.023 17:24:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:18.023 17:24:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:18.023 17:24:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.023 17:24:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.023 17:24:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.023 17:24:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.023 17:24:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:18.023 17:24:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.023 17:24:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.023 17:24:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.023 17:24:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.024 17:24:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.024 17:24:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.024 17:24:44 -- paths/export.sh@5 -- # export PATH 00:08:18.024 17:24:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.024 17:24:44 -- nvmf/common.sh@47 -- # : 0 00:08:18.024 17:24:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:18.024 17:24:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:18.024 17:24:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.024 17:24:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.024 17:24:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.024 17:24:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:18.024 17:24:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:18.024 17:24:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:18.024 17:24:44 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:18.024 17:24:44 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:18.024 17:24:44 -- target/abort.sh@14 -- # nvmftestinit 00:08:18.024 17:24:44 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:18.024 17:24:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.024 17:24:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:18.024 17:24:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:18.024 17:24:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:18.024 17:24:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.024 17:24:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.024 17:24:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.024 17:24:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:18.024 17:24:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:18.024 17:24:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:18.024 17:24:44 -- common/autotest_common.sh@10 -- # set +x 00:08:19.964 17:24:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:19.964 17:24:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.964 17:24:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.964 17:24:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.964 17:24:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.964 17:24:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.964 17:24:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.964 17:24:46 -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.964 17:24:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.964 17:24:46 -- nvmf/common.sh@296 -- 
# e810=() 00:08:19.964 17:24:46 -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.964 17:24:46 -- nvmf/common.sh@297 -- # x722=() 00:08:19.964 17:24:46 -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.964 17:24:46 -- nvmf/common.sh@298 -- # mlx=() 00:08:19.964 17:24:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.964 17:24:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.964 17:24:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.964 17:24:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.964 17:24:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.964 17:24:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.964 17:24:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.964 17:24:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.964 17:24:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.964 17:24:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.964 17:24:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.964 17:24:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.964 17:24:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.964 17:24:46 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:19.964 17:24:46 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:19.964 17:24:46 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:19.964 17:24:46 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:19.964 17:24:46 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:19.964 17:24:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.964 17:24:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.964 17:24:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:08:19.964 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:08:19.964 17:24:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:19.964 17:24:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:19.964 17:24:46 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:19.964 17:24:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:19.964 17:24:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.964 17:24:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:08:19.964 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:08:19.964 17:24:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:19.964 17:24:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:19.964 17:24:46 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:19.964 17:24:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:19.964 17:24:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.964 17:24:46 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:19.964 17:24:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.964 17:24:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.964 17:24:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:19.964 17:24:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.964 17:24:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:08:19.964 Found net devices under 0000:09:00.0: mlx_0_0 00:08:19.964 17:24:46 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:19.964 17:24:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.964 17:24:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.964 17:24:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:19.964 17:24:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.964 17:24:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:08:19.964 Found net devices under 0000:09:00.1: mlx_0_1 00:08:19.964 17:24:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.964 17:24:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:19.964 17:24:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:19.964 17:24:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:19.964 17:24:46 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:19.964 17:24:46 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:19.964 17:24:46 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:19.964 17:24:46 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:19.964 17:24:46 -- nvmf/common.sh@58 -- # uname 00:08:19.964 17:24:46 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:19.964 17:24:46 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:19.964 17:24:46 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:19.964 17:24:46 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:19.964 17:24:46 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:19.964 17:24:46 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:19.964 17:24:46 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:19.964 17:24:46 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:19.964 17:24:46 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:19.964 17:24:46 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:19.964 17:24:46 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:19.965 17:24:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:19.965 17:24:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:19.965 17:24:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:19.965 17:24:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:19.965 17:24:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:19.965 17:24:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:19.965 17:24:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.965 17:24:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:19.965 17:24:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:19.965 17:24:46 -- nvmf/common.sh@105 -- # continue 2 00:08:19.965 17:24:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:19.965 17:24:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.965 17:24:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:19.965 17:24:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.965 17:24:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:19.965 17:24:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:19.965 17:24:46 -- nvmf/common.sh@105 -- # continue 2 00:08:19.965 17:24:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:19.965 17:24:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:19.965 17:24:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:19.965 17:24:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:19.965 17:24:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:08:19.965 17:24:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:19.965 17:24:46 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:19.965 17:24:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:19.965 17:24:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:19.965 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:19.965 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:08:19.965 altname enp9s0f0np0 00:08:19.965 inet 192.168.100.8/24 scope global mlx_0_0 00:08:19.965 valid_lft forever preferred_lft forever 00:08:19.965 17:24:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:19.965 17:24:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:19.965 17:24:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:19.965 17:24:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:19.965 17:24:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:19.965 17:24:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:19.965 17:24:46 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:19.965 17:24:46 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:19.965 17:24:46 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:19.965 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:19.965 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:08:19.965 altname enp9s0f1np1 00:08:19.965 inet 192.168.100.9/24 scope global mlx_0_1 00:08:19.965 valid_lft forever preferred_lft forever 00:08:19.965 17:24:46 -- nvmf/common.sh@411 -- # return 0 00:08:19.965 17:24:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:19.965 17:24:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:19.965 17:24:46 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:19.965 17:24:46 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:19.965 17:24:46 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:19.965 17:24:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:19.965 17:24:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:19.965 17:24:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:19.965 17:24:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:19.965 17:24:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:19.965 17:24:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:19.965 17:24:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.965 17:24:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:19.965 17:24:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:19.965 17:24:46 -- nvmf/common.sh@105 -- # continue 2 00:08:19.965 17:24:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:19.965 17:24:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.965 17:24:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:19.965 17:24:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:19.965 17:24:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:19.965 17:24:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:19.965 17:24:46 -- nvmf/common.sh@105 -- # continue 2 00:08:19.965 17:24:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:19.965 17:24:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:19.965 17:24:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:19.965 17:24:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:19.965 17:24:46 -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:08:19.965 17:24:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:19.965 17:24:46 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:19.965 17:24:46 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:19.965 17:24:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:19.965 17:24:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:19.965 17:24:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:19.965 17:24:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:19.965 17:24:46 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:19.965 192.168.100.9' 00:08:19.965 17:24:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:19.965 192.168.100.9' 00:08:19.965 17:24:46 -- nvmf/common.sh@446 -- # head -n 1 00:08:19.965 17:24:46 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:19.965 17:24:46 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:19.965 192.168.100.9' 00:08:19.965 17:24:46 -- nvmf/common.sh@447 -- # tail -n +2 00:08:19.965 17:24:46 -- nvmf/common.sh@447 -- # head -n 1 00:08:19.965 17:24:46 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:19.965 17:24:46 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:19.965 17:24:46 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:19.965 17:24:46 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:19.965 17:24:46 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:19.965 17:24:46 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:19.965 17:24:46 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:19.965 17:24:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:19.965 17:24:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:19.965 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:08:20.225 17:24:46 -- nvmf/common.sh@470 -- # nvmfpid=1942192 00:08:20.225 17:24:46 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:20.225 17:24:46 -- nvmf/common.sh@471 -- # waitforlisten 1942192 00:08:20.225 17:24:46 -- common/autotest_common.sh@817 -- # '[' -z 1942192 ']' 00:08:20.225 17:24:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.225 17:24:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:20.225 17:24:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.225 17:24:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:20.225 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:08:20.225 [2024-04-18 17:24:46.526845] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:08:20.225 [2024-04-18 17:24:46.526922] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.226 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.226 [2024-04-18 17:24:46.590589] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.226 [2024-04-18 17:24:46.709767] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.226 [2024-04-18 17:24:46.709836] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
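The RDMA_IP_LIST derivation traced above boils down to probing each detected mlx_0_* netdev for its first IPv4 address; a condensed sketch using the same ip/awk/cut pipeline shown in the trace (interface names as detected on this node):

for ifc in mlx_0_0 mlx_0_1; do
  ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1   # strip the /24 prefix length
done
# prints 192.168.100.8 and 192.168.100.9, which become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP above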
00:08:20.226 [2024-04-18 17:24:46.709861] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.226 [2024-04-18 17:24:46.709874] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.226 [2024-04-18 17:24:46.709886] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.226 [2024-04-18 17:24:46.709975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.226 [2024-04-18 17:24:46.710028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.226 [2024-04-18 17:24:46.710031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.484 17:24:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:20.484 17:24:46 -- common/autotest_common.sh@850 -- # return 0 00:08:20.484 17:24:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:20.484 17:24:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:20.484 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:08:20.484 17:24:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.484 17:24:46 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:08:20.484 17:24:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.484 17:24:46 -- common/autotest_common.sh@10 -- # set +x 00:08:20.484 [2024-04-18 17:24:46.890591] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9e7380/0x9eb870) succeed. 00:08:20.484 [2024-04-18 17:24:46.900969] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9e88d0/0xa2cf00) succeed. 00:08:20.484 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.484 17:24:47 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:20.484 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.484 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:08:20.744 Malloc0 00:08:20.744 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.744 17:24:47 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:20.744 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.744 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:08:20.744 Delay0 00:08:20.744 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.744 17:24:47 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:20.744 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.744 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:08:20.744 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.744 17:24:47 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:20.744 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.744 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:08:20.744 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.744 17:24:47 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:20.744 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.744 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:08:20.744 [2024-04-18 17:24:47.070585] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** 
NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:20.744 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.744 17:24:47 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:20.744 17:24:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.744 17:24:47 -- common/autotest_common.sh@10 -- # set +x 00:08:20.744 17:24:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.744 17:24:47 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:20.744 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.744 [2024-04-18 17:24:47.153016] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:23.278 Initializing NVMe Controllers 00:08:23.278 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:23.278 controller IO queue size 128 less than required 00:08:23.278 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:23.278 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:23.278 Initialization complete. Launching workers. 00:08:23.278 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41409 00:08:23.278 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41470, failed to submit 62 00:08:23.278 success 41410, unsuccess 60, failed 0 00:08:23.278 17:24:49 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:23.278 17:24:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.278 17:24:49 -- common/autotest_common.sh@10 -- # set +x 00:08:23.278 17:24:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.278 17:24:49 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:23.278 17:24:49 -- target/abort.sh@38 -- # nvmftestfini 00:08:23.278 17:24:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:23.278 17:24:49 -- nvmf/common.sh@117 -- # sync 00:08:23.278 17:24:49 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:23.278 17:24:49 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:23.278 17:24:49 -- nvmf/common.sh@120 -- # set +e 00:08:23.278 17:24:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.278 17:24:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:23.278 rmmod nvme_rdma 00:08:23.278 rmmod nvme_fabrics 00:08:23.278 17:24:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.278 17:24:49 -- nvmf/common.sh@124 -- # set -e 00:08:23.278 17:24:49 -- nvmf/common.sh@125 -- # return 0 00:08:23.278 17:24:49 -- nvmf/common.sh@478 -- # '[' -n 1942192 ']' 00:08:23.278 17:24:49 -- nvmf/common.sh@479 -- # killprocess 1942192 00:08:23.278 17:24:49 -- common/autotest_common.sh@936 -- # '[' -z 1942192 ']' 00:08:23.278 17:24:49 -- common/autotest_common.sh@940 -- # kill -0 1942192 00:08:23.278 17:24:49 -- common/autotest_common.sh@941 -- # uname 00:08:23.278 17:24:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:23.278 17:24:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1942192 00:08:23.278 17:24:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:23.278 17:24:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = 
sudo ']' 00:08:23.278 17:24:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1942192' 00:08:23.278 killing process with pid 1942192 00:08:23.278 17:24:49 -- common/autotest_common.sh@955 -- # kill 1942192 00:08:23.278 17:24:49 -- common/autotest_common.sh@960 -- # wait 1942192 00:08:23.278 17:24:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:23.278 17:24:49 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:23.278 00:08:23.278 real 0m5.367s 00:08:23.278 user 0m11.581s 00:08:23.278 sys 0m1.889s 00:08:23.278 17:24:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:23.278 17:24:49 -- common/autotest_common.sh@10 -- # set +x 00:08:23.278 ************************************ 00:08:23.278 END TEST nvmf_abort 00:08:23.278 ************************************ 00:08:23.278 17:24:49 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:23.278 17:24:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:23.278 17:24:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.278 17:24:49 -- common/autotest_common.sh@10 -- # set +x 00:08:23.536 ************************************ 00:08:23.536 START TEST nvmf_ns_hotplug_stress 00:08:23.536 ************************************ 00:08:23.536 17:24:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:08:23.536 * Looking for test storage... 00:08:23.536 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:23.536 17:24:49 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.536 17:24:49 -- nvmf/common.sh@7 -- # uname -s 00:08:23.536 17:24:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.536 17:24:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.536 17:24:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.536 17:24:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.536 17:24:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.536 17:24:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.536 17:24:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.536 17:24:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.536 17:24:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.536 17:24:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.536 17:24:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.536 17:24:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.536 17:24:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.536 17:24:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.536 17:24:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.536 17:24:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.537 17:24:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:23.537 17:24:49 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.537 17:24:49 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.537 17:24:49 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.537 17:24:49 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.537 17:24:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.537 17:24:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.537 17:24:49 -- paths/export.sh@5 -- # export PATH 00:08:23.537 17:24:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.537 17:24:49 -- nvmf/common.sh@47 -- # : 0 00:08:23.537 17:24:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.537 17:24:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.537 17:24:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.537 17:24:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.537 17:24:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.537 17:24:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.537 17:24:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.537 17:24:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.537 17:24:49 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:23.537 17:24:49 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:08:23.537 17:24:49 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:23.537 17:24:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.537 17:24:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:23.537 17:24:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:23.537 17:24:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:23.537 17:24:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:23.537 17:24:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.537 17:24:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.537 17:24:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:23.537 17:24:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:23.537 17:24:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:23.537 17:24:49 -- common/autotest_common.sh@10 -- # set +x 00:08:25.445 17:24:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:25.445 17:24:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.445 17:24:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.445 17:24:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.445 17:24:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.445 17:24:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.445 17:24:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.445 17:24:51 -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.445 17:24:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.445 17:24:51 -- nvmf/common.sh@296 -- # e810=() 00:08:25.445 17:24:51 -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.445 17:24:51 -- nvmf/common.sh@297 -- # x722=() 00:08:25.445 17:24:51 -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.445 17:24:51 -- nvmf/common.sh@298 -- # mlx=() 00:08:25.445 17:24:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.445 17:24:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.445 17:24:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.445 17:24:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.445 17:24:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.445 17:24:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.445 17:24:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.445 17:24:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.445 17:24:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.445 17:24:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.445 17:24:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.445 17:24:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.445 17:24:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.445 17:24:51 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:25.445 17:24:51 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:25.445 17:24:51 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:25.445 17:24:51 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:25.445 17:24:51 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:25.445 17:24:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.445 17:24:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.445 17:24:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:08:25.445 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:08:25.445 17:24:51 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:25.445 17:24:51 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:25.445 17:24:51 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:25.445 17:24:51 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:25.445 17:24:51 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:08:25.445 17:24:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:08:25.445 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:08:25.445 17:24:51 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:25.445 17:24:51 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:25.445 17:24:51 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:08:25.445 17:24:51 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:25.445 17:24:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.445 17:24:51 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:25.445 17:24:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.445 17:24:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.445 17:24:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:25.445 17:24:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.445 17:24:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:08:25.445 Found net devices under 0000:09:00.0: mlx_0_0 00:08:25.445 17:24:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.445 17:24:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.445 17:24:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.445 17:24:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:25.445 17:24:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.445 17:24:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:08:25.445 Found net devices under 0000:09:00.1: mlx_0_1 00:08:25.445 17:24:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.445 17:24:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:25.446 17:24:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:25.446 17:24:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:25.446 17:24:51 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:25.446 17:24:51 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:25.446 17:24:51 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:25.446 17:24:51 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:25.446 17:24:51 -- nvmf/common.sh@58 -- # uname 00:08:25.446 17:24:51 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:25.446 17:24:51 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:25.446 17:24:51 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:25.446 17:24:51 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:25.446 17:24:51 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:25.446 17:24:51 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:25.446 17:24:51 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:25.446 17:24:51 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:25.446 17:24:51 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:25.446 17:24:51 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:25.446 17:24:51 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:25.446 17:24:51 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:25.446 17:24:51 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:25.446 17:24:51 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:25.446 17:24:51 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:25.446 17:24:51 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:25.446 17:24:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:25.446 17:24:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:08:25.446 17:24:51 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:25.446 17:24:51 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:25.446 17:24:51 -- nvmf/common.sh@105 -- # continue 2 00:08:25.446 17:24:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:25.446 17:24:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.446 17:24:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:25.446 17:24:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.446 17:24:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:25.446 17:24:51 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:25.446 17:24:51 -- nvmf/common.sh@105 -- # continue 2 00:08:25.446 17:24:51 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:25.446 17:24:51 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:25.446 17:24:51 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:25.446 17:24:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:25.446 17:24:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:25.446 17:24:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:25.446 17:24:51 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:25.446 17:24:51 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:25.446 17:24:51 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:25.446 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:25.446 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:08:25.446 altname enp9s0f0np0 00:08:25.446 inet 192.168.100.8/24 scope global mlx_0_0 00:08:25.446 valid_lft forever preferred_lft forever 00:08:25.446 17:24:51 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:25.446 17:24:51 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:25.446 17:24:51 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:25.446 17:24:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:25.446 17:24:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:25.446 17:24:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:25.446 17:24:51 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:25.446 17:24:51 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:25.446 17:24:51 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:25.446 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:25.446 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:08:25.446 altname enp9s0f1np1 00:08:25.446 inet 192.168.100.9/24 scope global mlx_0_1 00:08:25.446 valid_lft forever preferred_lft forever 00:08:25.446 17:24:51 -- nvmf/common.sh@411 -- # return 0 00:08:25.446 17:24:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:25.446 17:24:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:25.446 17:24:51 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:25.446 17:24:51 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:25.446 17:24:51 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:25.446 17:24:51 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:25.446 17:24:51 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:25.446 17:24:51 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:25.446 17:24:51 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:25.446 17:24:51 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:25.446 17:24:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:25.446 17:24:51 -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.446 17:24:51 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:25.446 17:24:51 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:25.446 17:24:51 -- nvmf/common.sh@105 -- # continue 2 00:08:25.446 17:24:51 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:25.446 17:24:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.446 17:24:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:25.446 17:24:51 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.446 17:24:51 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:25.446 17:24:51 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:25.446 17:24:51 -- nvmf/common.sh@105 -- # continue 2 00:08:25.446 17:24:51 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:25.446 17:24:51 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:25.446 17:24:51 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:25.446 17:24:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:25.446 17:24:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:25.446 17:24:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:25.446 17:24:51 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:25.446 17:24:51 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:25.446 17:24:51 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:25.446 17:24:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:25.446 17:24:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:25.446 17:24:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:25.446 17:24:51 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:25.446 192.168.100.9' 00:08:25.446 17:24:51 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:25.446 192.168.100.9' 00:08:25.446 17:24:51 -- nvmf/common.sh@446 -- # head -n 1 00:08:25.446 17:24:51 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:25.446 17:24:51 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:25.446 192.168.100.9' 00:08:25.446 17:24:51 -- nvmf/common.sh@447 -- # tail -n +2 00:08:25.446 17:24:51 -- nvmf/common.sh@447 -- # head -n 1 00:08:25.446 17:24:51 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:25.446 17:24:51 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:25.446 17:24:51 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:25.446 17:24:51 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:25.446 17:24:51 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:25.446 17:24:51 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:25.446 17:24:51 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:08:25.446 17:24:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:25.446 17:24:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:25.446 17:24:51 -- common/autotest_common.sh@10 -- # set +x 00:08:25.446 17:24:51 -- nvmf/common.sh@470 -- # nvmfpid=1944270 00:08:25.446 17:24:51 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:25.446 17:24:51 -- nvmf/common.sh@471 -- # waitforlisten 1944270 00:08:25.446 17:24:51 -- common/autotest_common.sh@817 -- # '[' -z 1944270 ']' 00:08:25.446 17:24:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.446 17:24:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:25.446 17:24:51 -- common/autotest_common.sh@824 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.446 17:24:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:25.446 17:24:51 -- common/autotest_common.sh@10 -- # set +x 00:08:25.446 [2024-04-18 17:24:51.977627] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:08:25.446 [2024-04-18 17:24:51.977727] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.705 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.705 [2024-04-18 17:24:52.036067] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.705 [2024-04-18 17:24:52.145365] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.705 [2024-04-18 17:24:52.145435] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.705 [2024-04-18 17:24:52.145451] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.705 [2024-04-18 17:24:52.145463] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.705 [2024-04-18 17:24:52.145474] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.705 [2024-04-18 17:24:52.145532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.705 [2024-04-18 17:24:52.145649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.705 [2024-04-18 17:24:52.145652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.964 17:24:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:25.964 17:24:52 -- common/autotest_common.sh@850 -- # return 0 00:08:25.964 17:24:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:25.964 17:24:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:25.964 17:24:52 -- common/autotest_common.sh@10 -- # set +x 00:08:25.964 17:24:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.964 17:24:52 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:08:25.964 17:24:52 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:26.222 [2024-04-18 17:24:52.556986] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x235a380/0x235e870) succeed. 00:08:26.222 [2024-04-18 17:24:52.567330] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x235b8d0/0x239ff00) succeed. 
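Both targets in this log go through the same bring-up before any subsystem exists; condensed from the trace above (binary path, core mask and transport options copied from the log; waitforlisten is the autotest_common.sh helper that blocks until the RPC socket at /var/tmp/spdk.sock answers):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until nvmf_tgt is ready for RPCs
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192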
00:08:26.222 17:24:52 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:26.480 17:24:52 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:26.737 [2024-04-18 17:24:53.163641] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:26.738 17:24:53 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:26.995 17:24:53 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:27.252 Malloc0 00:08:27.252 17:24:53 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:27.509 Delay0 00:08:27.509 17:24:53 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.766 17:24:54 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:28.022 NULL1 00:08:28.022 17:24:54 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:28.280 17:24:54 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1944685 00:08:28.280 17:24:54 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:28.280 17:24:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:28.280 17:24:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.280 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.665 Read completed with error (sct=0, sc=11) 00:08:29.666 17:24:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.666 17:24:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 
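The trace above completes the data-path setup: subsystem nqn.2016-06.io.spdk:cnode1 is created with room for 10 namespaces, RDMA listeners are added on 192.168.100.8:4420 for the subsystem and for discovery, Malloc0 (32 MB, 512-byte blocks) is wrapped by the delay bdev Delay0, a 1000-block 512-byte null bdev NULL1 is created, both are attached as namespaces, and spdk_nvme_perf (PID 1944685) starts a 30-second queue-depth-128 random-read load of 512-byte I/Os against the target. Everything that follows is the hot-plug loop, which keeps detaching and re-attaching namespace 1 and growing NULL1 while that load runs. Condensed into plain shell (the real script also paces the iterations), one pass looks roughly like this:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
kill -0 "$PERF_PID"                      # loop only while spdk_nvme_perf is still running
$rpc nvmf_subsystem_remove_ns $nqn 1     # detach nsid 1 while reads are in flight
$rpc nvmf_subsystem_add_ns $nqn Delay0   # re-attach the delayed Malloc namespace
null_size=$((null_size + 1))
$rpc bdev_null_resize NULL1 $null_size   # grow NULL1 by one block each pass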
00:08:29.666 17:24:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:29.924 true 00:08:29.924 17:24:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:29.924 17:24:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.861 17:24:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:30.861 17:24:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:08:30.861 17:24:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:31.119 true 00:08:31.119 17:24:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:31.119 17:24:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.059 17:24:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.059 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.317 17:24:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:08:32.317 17:24:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:32.317 true 00:08:32.577 17:24:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:32.577 17:24:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.147 
17:24:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.405 17:24:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:08:33.405 17:24:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:33.663 true 00:08:33.664 17:25:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:33.664 17:25:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.599 17:25:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:34.858 17:25:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:08:34.858 17:25:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:35.115 true 00:08:35.115 17:25:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:35.115 17:25:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.049 17:25:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.049 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.049 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.307 17:25:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:08:36.307 17:25:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:36.307 true 00:08:36.307 17:25:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:36.307 17:25:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.241 17:25:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:37.500 17:25:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:08:37.500 17:25:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:37.792 true 00:08:37.793 17:25:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:37.793 17:25:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.359 17:25:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.617 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.876 17:25:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:08:38.876 17:25:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:38.876 true 00:08:38.876 17:25:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:38.876 17:25:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.135 17:25:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.393 17:25:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:08:39.393 17:25:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:39.651 true 00:08:39.651 17:25:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:39.651 17:25:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.586 17:25:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:40.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:40.865 [2024-04-18 17:25:07.208658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.208753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.208801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.208843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.208885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.208927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.208972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.209015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.209056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.209099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.209138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.209175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.209218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.209261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:08:40.865 [2024-04-18 17:25:07.209314 - 2024-04-18 17:25:07.228194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (the same rejection, logged once per read across this interval; console timestamps 00:08:40.865 through 00:08:40.869)
00:08:40.869 [2024-04-18 17:25:07.228239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:40.869 17:25:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010
00:08:40.869 17:25:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:08:40.871 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
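The two ns_hotplug_stress.sh trace lines above show the stress script bumping null_size to 1010 and resizing the NULL1 null bdev through rpc.py while reads against the namespace are still completing with errors. A minimal standalone sketch of that resize step follows; the starting size, step, and iteration count are illustrative assumptions, and only the rpc.py path and the bdev_null_resize call itself are taken from the log.

#!/usr/bin/env bash
# Illustrative sketch only -- not the contents of ns_hotplug_stress.sh.
# Assumes the SPDK target is already running and a null bdev named NULL1 exists.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

null_size=1000                     # assumed starting size
for _ in $(seq 1 20); do           # iteration count is illustrative
    null_size=$((null_size + 10))  # e.g. reaches 1010, as in the trace above
    "$rpc" bdev_null_resize NULL1 "$null_size"
done

Repeatedly resizing the backing null bdev while reads are outstanding appears to be the pattern the hot-plug stress test is exercising at this point in the run.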
00:08:40.875 [2024-04-18 17:25:07.251208] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.251993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 
[2024-04-18 17:25:07.252619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.875 [2024-04-18 17:25:07.252952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.252996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.253952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.254958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255185] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.255979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.256024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.256085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.256131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.256176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.256224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.256270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.256318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.256379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.256439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.256485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.256529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.256575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 
[2024-04-18 17:25:07.256621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.876 [2024-04-18 17:25:07.256671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.256734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.256783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.256841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.256889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.256933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.256981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.257975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.258973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259455] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.259983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 
[2024-04-18 17:25:07.260712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.260847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.261040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.261098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.261149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.261193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.261244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.261290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.261334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.261406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.877 [2024-04-18 17:25:07.261455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.261504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.261550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.261599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.261643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.261702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.261752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.261797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.261846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.261889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.261936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.261980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.262989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263468] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.263960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 
[2024-04-18 17:25:07.264923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.264969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.265993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.266158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.266220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.878 [2024-04-18 17:25:07.266274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.266982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267567] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.267998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 [2024-04-18 17:25:07.268910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.879 
[2024-04-18 17:25:07.268956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 lines repeated with timestamps 2024-04-18 17:25:07.268956 through 17:25:07.297468 (console time 00:08:40.879 - 00:08:40.885); duplicate lines elided ...]
00:08:40.883 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:08:40.885 [2024-04-18 17:25:07.297468] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.297520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.297568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.297619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.297665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.297726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.297772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.297818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.297862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.297906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.297951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.297997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 
[2024-04-18 17:25:07.298876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.298964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.299008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.885 [2024-04-18 17:25:07.299053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.299888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.300963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301517] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.301966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 
[2024-04-18 17:25:07.302940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.302983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.303030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.303073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.303118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.303159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.303200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.303246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.886 [2024-04-18 17:25:07.303292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.303491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.303559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.303608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.303657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.303715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.303760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.303805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.303854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.303905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.303948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.303994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.304962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305701] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.305972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.306990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 
[2024-04-18 17:25:07.307080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.307986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.308034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.308081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.308126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.308173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.308219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.308262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.308308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.308354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.308558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.887 [2024-04-18 17:25:07.308625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.308689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.308748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.308791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.308837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.308887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.308933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.308981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309724] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.309991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.310987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 
[2024-04-18 17:25:07.311119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.311978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.312967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.313012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.313055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.313101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.313144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.313188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.313235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.313279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.313326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.313394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.313441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.313487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.313679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.888 [2024-04-18 17:25:07.313740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.313786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.313832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.313876] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.313920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.313962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.314967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.315011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 [2024-04-18 17:25:07.315053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.889 
00:08:40.889 [2024-04-18 17:25:07.315106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:40.889 (the same *ERROR* line repeats continuously, several hundred times, through [2024-04-18 17:25:07.342227]; individual repeats omitted, elapsed-time prefixes 00:08:40.889 - 00:08:40.893)
00:08:40.893 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-04-18 17:25:07.342273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.342462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.342532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.342581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.342630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.342677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.342727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.342772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.342825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.342877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.342923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.342973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.343987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.344995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345092] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.345958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.346010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.346058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.346106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.346152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.346200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.346247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.346299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.346345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.346396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.894 [2024-04-18 17:25:07.346443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 
[2024-04-18 17:25:07.346491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.346539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.346590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.346636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.346680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.346728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.346778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.346825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.346872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.346917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.346968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.347977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.348954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349140] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.349971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 
[2024-04-18 17:25:07.350539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.350826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.351981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.352027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.352073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.352120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.352166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.352211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.352264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.895 [2024-04-18 17:25:07.352310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.352358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.352414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.352464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.352513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.352692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.352757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.352804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.352851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.352898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.352944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.352993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353286] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.353957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 
[2024-04-18 17:25:07.354726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.354963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.355932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.356989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357321] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.357976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.358025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.358073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.358120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.358168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.358216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.358260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.358306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.358352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.358411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.358460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.358512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.896 [2024-04-18 17:25:07.358562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.358609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.358658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.358705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 
[2024-04-18 17:25:07.358754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.358801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.358852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.358900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.358949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.358994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.359996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.360960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361540] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.361981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 [2024-04-18 17:25:07.362755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:40.897 
[2024-04-18 17:25:07.362930 - 17:25:07.390488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same message repeated for each timestamp in this interval; log timestamps 00:08:40.897 through 00:08:41.186) 
[2024-04-18 17:25:07.390551] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.390601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.390652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.390698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.390746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.390791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.390841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.390891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.390938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.390986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 
[2024-04-18 17:25:07.391837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.391994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:41.186 [2024-04-18 17:25:07.392217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.392972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 
17:25:07.393210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.393954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.394020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.394070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.394118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.394164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.394211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.394262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.394312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.394360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.394413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.394465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.394515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.394564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.186 [2024-04-18 17:25:07.394611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:41.186 [2024-04-18 17:25:07.394657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.394719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.394783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.394832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.394877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.394922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.394970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.395972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.396964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397549] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.397974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 
[2024-04-18 17:25:07.398815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.398960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.399012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.399193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.399255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.399307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.399356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.399412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.399463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.399510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.187 [2024-04-18 17:25:07.399562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.399606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.399653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.399697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.399744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.399792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.399839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.399885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.399931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.399982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.400965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401590] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.401969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.402956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 
[2024-04-18 17:25:07.403010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.403978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.404026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.404073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.404121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.404305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.188 [2024-04-18 17:25:07.404375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.404433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.404481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.404529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.404578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.404631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.404678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.404728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.404777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.404829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.404876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.404928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.404976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405685] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.405876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.406999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 
[2024-04-18 17:25:07.407096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.407966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.408014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.408061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.408105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.408159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.408206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.408259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.408306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.408360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.408412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.408464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.189 [2024-04-18 17:25:07.408514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" messages repeated continuously from [2024-04-18 17:25:07.408559] through [2024-04-18 17:25:07.437303] (console timestamps 00:08:41.189 - 00:08:41.196) ...]
00:08:41.196 [2024-04-18 17:25:07.437350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.437402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.437451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.437497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.437543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.437594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.437637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.437686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.437730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.437778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.437827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.437874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.437923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.437970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438760] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.196 [2024-04-18 17:25:07.438945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.438991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.439947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 
[2024-04-18 17:25:07.439995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.440966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.441991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.442055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.442105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.442149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.442194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.442242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.442289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.442335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.442390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.442441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.442489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.197 [2024-04-18 17:25:07.442540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.442596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.442641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.442689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.442736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.442782] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.442829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.442895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.442953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.443967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 
[2024-04-18 17:25:07.444297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.444975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:41.198 [2024-04-18 17:25:07.445507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 
17:25:07.445668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.445995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.446042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.446106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.198 [2024-04-18 17:25:07.446167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.446907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:41.199 [2024-04-18 17:25:07.446955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.447984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.448990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449811] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.449953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.450003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.199 [2024-04-18 17:25:07.450052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.450968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 
[2024-04-18 17:25:07.451204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.451985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.452976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453854] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.453949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.454127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.200 [2024-04-18 17:25:07.454196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.454977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.455029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.455075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.455124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.455170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 [2024-04-18 17:25:07.455215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.201 
[2024-04-18 17:25:07.455268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the identical *ERROR* line from ctrlr_bdev.c:298 (nvmf_bdev_ctrlr_read_cmd) repeats continuously from 17:25:07.455268 onward, only the timestamp advancing; the duplicate entries are collapsed here, and a sketch of the check that emits this message follows this block ...]
00:08:41.205 true
[... the same *ERROR* line continues to repeat, with timestamps running through 17:25:07.482742 ...]
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.482786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.482829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.482875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.482920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.482966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.483011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.483058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.483233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.483296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.483344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.483421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.483471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.483527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.483575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.483623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.483669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.208 [2024-04-18 17:25:07.483727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 17:25:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:41.209 [2024-04-18 17:25:07.483769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.483815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.483859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 17:25:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.209 [2024-04-18 17:25:07.483903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.483947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.483991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484037] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.484943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:08:41.209 [2024-04-18 17:25:07.485454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.485978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.486999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.209 [2024-04-18 17:25:07.487712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.487760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.487805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.487850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.487895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.487940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.487984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488029] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.488986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 
[2024-04-18 17:25:07.489459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.489845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.490977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.210 [2024-04-18 17:25:07.491885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.491937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.491984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492171] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.492974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 
[2024-04-18 17:25:07.493555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.493980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.494850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.495962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.496005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.496049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.211 [2024-04-18 17:25:07.496094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496140] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.496997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 
[2024-04-18 17:25:07.497591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.497987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:41.212 [2024-04-18 17:25:07.498445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 17:25:07.498906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.212 [2024-04-18 
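[editor's note] The repeated entry above is the length check in SPDK's nvmf_bdev_ctrlr_read_cmd() (ctrlr_bdev.c:298, as the message itself says): a read is failed when its requested transfer length, NLB times the namespace block size, is larger than the data length described by the command's SGL, and the host then sees "Read completed with error (sct=0, sc=15)", i.e. the generic Data SGL Length Invalid status. These completions pile up while ns_hotplug_stress.sh removes namespace 1 under active I/O via the rpc.py call above. The sketch below only illustrates that kind of check under assumed names (read_cmd, validate_read_length); it is not the actual SPDK source.

/* Illustrative sketch only (hypothetical names): mirrors the kind of
 * NLB * block_size vs. SGL length validation reported in the log above;
 * not the real SPDK implementation. */
#include <stdint.h>
#include <stdio.h>

struct read_cmd {
    uint64_t start_lba;   /* starting LBA of the read */
    uint32_t nlb;         /* number of logical blocks (already a 1-based count here) */
};

/* Returns 0 if the SGL can hold the requested data, -1 otherwise. */
static int validate_read_length(const struct read_cmd *cmd,
                                uint32_t block_size, uint64_t sgl_length)
{
    uint64_t transfer_len = (uint64_t)cmd->nlb * block_size;

    if (transfer_len > sgl_length) {
        /* Same shape as the log message; the caller would complete the
         * command with sct=0 (generic), sc=0x0f (Data SGL Length Invalid). */
        fprintf(stderr, "Read NLB %u * block size %u > SGL length %llu\n",
                cmd->nlb, block_size, (unsigned long long)sgl_length);
        return -1;
    }
    return 0;
}

int main(void)
{
    struct read_cmd cmd = { .start_lba = 0, .nlb = 1 };

    /* Reproduces the failing condition from the log: 1 * 512 > 1. */
    return validate_read_length(&cmd, 512, 1) == 0 ? 0 : 1;
}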
00:08:41.212 [same error entry repeats from 17:25:07.498445 through 17:25:07.507181]
00:08:41.214 [2024-04-18 17:25:07.507224] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.507973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 
[2024-04-18 17:25:07.508492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.508984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.509975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.510017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.510067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.510113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.510159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.510200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.510242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.510456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.214 [2024-04-18 17:25:07.510525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.510573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.510621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.510675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.510740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.510785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.510828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.510874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.510917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.510962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511291] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.511966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 
[2024-04-18 17:25:07.512713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.512995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.513974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.514959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.515001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.515046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.515089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.515138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.515183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.515227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.515270] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.215 [2024-04-18 17:25:07.515317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.515514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.515581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.515628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.515691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.515735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.515779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.515827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.515868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.515910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.515951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 
[2024-04-18 17:25:07.516669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.516994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.517959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.518988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519286] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.519997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 
[2024-04-18 17:25:07.520663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.520966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.521012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.521054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.521096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.216 [2024-04-18 17:25:07.521139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.521905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.522987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523205] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.523979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.524024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.524068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.524115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.524157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.524199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.524240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.524281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.524323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.524365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.524441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.524486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 [2024-04-18 17:25:07.524532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.217 
[2024-04-18 17:25:07.524577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same "Read NLB 1 * block size 512 > SGL length 1" error from ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd repeated several hundred times over the following ~28 ms; identical lines elided ...]
00:08:41.222 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
length 1 00:08:41.222 [2024-04-18 17:25:07.551545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.551590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.551637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.551683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.551750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.551794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.551837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.551882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.551924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.551973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.222 [2024-04-18 17:25:07.552889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.552933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.552978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.553966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554257] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.554977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 
[2024-04-18 17:25:07.555624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.555973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.556992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.557964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.558015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.223 [2024-04-18 17:25:07.558058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558229] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.558987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 
[2024-04-18 17:25:07.559639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.559971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.560995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.561959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562359] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.562959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.563004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.563046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.563089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.563132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.563177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.563220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.563263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.224 [2024-04-18 17:25:07.563308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.563351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.563420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.563469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.563515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.563565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.563611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 
[2024-04-18 17:25:07.563673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.563721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.563769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.563817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.563863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.563912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.564993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.565990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566418] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.566971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 
[2024-04-18 17:25:07.567811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.567952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.568952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.569136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.569194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.569241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.569283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.569334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.569407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.225 [2024-04-18 17:25:07.569455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.569504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.569554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.569597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.569642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.569701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.569752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.569796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.569842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.569884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.569930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.569973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.570019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.570065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.570111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.570157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.570202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.570245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.570290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.570336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 [2024-04-18 17:25:07.570413] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.226 
[... the same ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 message repeats verbatim for each subsequent read-command iteration, timestamps 2024-04-18 17:25:07.570460 through 17:25:07.598749, elapsed 00:08:41.226-00:08:41.230 ...] 
[2024-04-18 17:25:07.598794] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.598839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.598885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.598929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.598974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.599973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.600017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.600061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.600105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 
[2024-04-18 17:25:07.600151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.600197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.600247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.600291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.600338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.600412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.600464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.600518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.230 [2024-04-18 17:25:07.600566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.600620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.600665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.600737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.600782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.600829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.600876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.600922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.600965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.601964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.602962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603034] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.603976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 
[2024-04-18 17:25:07.604318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:41.231 [2024-04-18 17:25:07.604690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.604988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 
17:25:07.605733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.605962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.606957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:41.231 [2024-04-18 17:25:07.607134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.607925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.608106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.608169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.608218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.608268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.608313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.608358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.231 [2024-04-18 17:25:07.608435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.608482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.608530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.608577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.608627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.608689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.608739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.608789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.608835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.608880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.608924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.608969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.609969] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.610998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 
[2024-04-18 17:25:07.611233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.611985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.232 [2024-04-18 17:25:07.612036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.612964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.613996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614090] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.614882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.615053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.615117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.615166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.615212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.615258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.615304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.615350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.615425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.615475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 
[2024-04-18 17:25:07.615523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.615569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.615614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.233 [2024-04-18 17:25:07.615680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.615732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.615779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.615823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.615868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.615916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.615967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.234 [2024-04-18 17:25:07.616965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:08:41.234 [... same ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated; entries with timestamps 17:25:07.617018 through 17:25:07.646064 are identical apart from the timestamp ...] 
00:08:41.239 [2024-04-18 17:25:07.646109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.646967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.647012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.647058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.647106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.239 [2024-04-18 17:25:07.647151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647544] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.647950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.648963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 
[2024-04-18 17:25:07.649007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.649996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.650995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651836] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.651981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.240 [2024-04-18 17:25:07.652726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.652772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.652819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.652865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.652915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.652960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 
[2024-04-18 17:25:07.653104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.653967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.654951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.655997] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.656976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 
[2024-04-18 17:25:07.657434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.657981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.658027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.658073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.658116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.658162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.658206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.658252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.658299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.658348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.241 [2024-04-18 17:25:07.658419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.242 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:41.242 [2024-04-18 17:25:07.658600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.242 [2024-04-18 17:25:07.658665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.242 [2024-04-18 17:25:07.658736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.242 [2024-04-18 17:25:07.658783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.242 [2024-04-18 
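The repeated *ERROR* line above is logged by the NVMe-oF target's bdev layer (ctrlr_bdev.c, nvmf_bdev_ctrlr_read_cmd) when a read command requests more data (NLB x block size, here 1 x 512 bytes) than the request's data SGL describes (here 1 byte). The command is then completed with an error rather than submitted to the backing bdev, which matches the suppressed completion status above (sct=0 generic, sc=15, i.e. 0x0F Data SGL Length Invalid in the NVMe spec). The C sketch below is a minimal, self-contained illustration of that kind of length check; the names (read_cmd_length_ok, STATUS_SC_DATA_SGL_LENGTH_INVALID) are hypothetical and this is not the SPDK implementation.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical status values mirroring NVMe "Generic Command Status" codes. */
#define STATUS_SCT_GENERIC                 0x00
#define STATUS_SC_DATA_SGL_LENGTH_INVALID  0x0f  /* sc=15, as in the suppressed completions above */

/*
 * Sketch of the length validation that produces the *ERROR* line above:
 * a read for num_blocks logical blocks of block_size bytes is only accepted
 * if the SGL attached to the request describes at least that many bytes.
 * Otherwise the command is completed with an error status instead of being
 * submitted to the backing block device.
 */
static bool read_cmd_length_ok(uint64_t num_blocks, uint32_t block_size,
                               uint32_t sgl_length, uint8_t *sct, uint8_t *sc)
{
    if (num_blocks * (uint64_t)block_size > sgl_length) {
        fprintf(stderr,
                "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu32 "\n",
                num_blocks, block_size, sgl_length);
        *sct = STATUS_SCT_GENERIC;
        *sc = STATUS_SC_DATA_SGL_LENGTH_INVALID;
        return false;
    }
    return true;
}

int main(void)
{
    uint8_t sct = 0, sc = 0;

    /* NLB 1 * block size 512 > SGL length 1 -> rejected, like the log above. */
    if (!read_cmd_length_ok(1, 512, 1, &sct, &sc)) {
        printf("Read completed with error (sct=%u, sc=%u)\n",
               (unsigned)sct, (unsigned)sc);
    }
    return 0;
}

Compiled and run, this prints the same error line and the same completion status (sct=0, sc=15) seen in the log for an NLB of 1, a 512-byte block size, and a 1-byte SGL.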
00:08:41.242 [... the same *ERROR* line continues to repeat, timestamps 2024-04-18 17:25:07.658600 through 17:25:07.669862 ...]
00:08:41.244 [2024-04-18 17:25:07.669911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.669956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.670980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671306] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.671983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 
[2024-04-18 17:25:07.672820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.672957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.673995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.674041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.674085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.674263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.674328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.674401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.674451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.674497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.244 [2024-04-18 17:25:07.674544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.674589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.674637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.674702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.674746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.674790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.674835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.674880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.674924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.674970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675475] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.675823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 
[2024-04-18 17:25:07.676904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.676953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.677983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.678961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679706] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.679984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.245 [2024-04-18 17:25:07.680028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.680925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 
[2024-04-18 17:25:07.680970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.681969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.682968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683826] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.683959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.246 [2024-04-18 17:25:07.684925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.684968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 
[2024-04-18 17:25:07.685239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.685987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.686956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.247 [2024-04-18 17:25:07.687892] ctrlr_bdev.c: 
00:08:41.247 [2024-04-18 17:25:07.688074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:41.247 [... the same ctrlr_bdev.c:298 *ERROR* line repeats for each read attempt, timestamps 17:25:07.688074 through 17:25:07.712397 ...]
00:08:41.514 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:08:41.514 [... further repetitions of the same ctrlr_bdev.c:298 *ERROR* line through 17:25:07.717195 ...]
[2024-04-18 17:25:07.717239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.717287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.717335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.717392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.717440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.717491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.717678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.717742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.717789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.717836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.717886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.717936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.717981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.515 [2024-04-18 17:25:07.718957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.516 [2024-04-18 17:25:07.719005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.516 [2024-04-18 17:25:07.719056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.516 [2024-04-18 17:25:07.719103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.516 [2024-04-18 17:25:07.719152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.516 [2024-04-18 17:25:07.719199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.516 17:25:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:41.516 17:25:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:08:41.516 17:25:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:41.774 [2024-04-18 17:25:08.109993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:41.774 true 00:08:41.774 17:25:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:41.774 17:25:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.030 17:25:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.288 17:25:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:08:42.288 17:25:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1012 00:08:42.546 true 00:08:42.546 17:25:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:42.546 17:25:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.486 17:25:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.486 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.744 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.744 17:25:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:08:43.744 17:25:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:44.001 true 00:08:44.002 17:25:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:44.002 17:25:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.940 17:25:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:45.199 17:25:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:08:45.199 17:25:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:45.457 true 00:08:45.457 17:25:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:45.457 17:25:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.025 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:08:46.025 17:25:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:46.543 17:25:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:08:46.543 17:25:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:46.543 true 00:08:46.543 17:25:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:46.543 17:25:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.478 17:25:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.737 17:25:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:08:47.737 17:25:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:47.997 true 00:08:47.997 17:25:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:47.997 17:25:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.946 17:25:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.946 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:08:48.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.946 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.946 17:25:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:08:48.946 17:25:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:49.205 true 00:08:49.205 17:25:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:49.205 17:25:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.145 17:25:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.404 17:25:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:08:50.404 17:25:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:50.662 true 00:08:50.662 17:25:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:50.662 17:25:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.232 17:25:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:51.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
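The xtrace markers above (target/ns_hotplug_stress.sh@35-@41) correspond to one pass of the namespace hotplug/resize loop: check that the target is still alive (kill -0), hot-remove namespace 1, re-add the Delay0 bdev, bump null_size, and resize NULL1 while I/O keeps running. A minimal sketch of that loop, reconstructed from the trace alone, is below; the loop structure and variable names are assumptions, and only the rpc.py path, the subsystem NQN, and PID 1944685 are taken from the log, so the real ns_hotplug_stress.sh may differ.

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    tgt_pid=1944685          # nvmf target PID seen in the kill -0 checks above
    null_size=1010
    while kill -0 "$tgt_pid"; do                     # @35: stop once the target process exits
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # @36: hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # @37: hot-add the Delay0 bdev back as a namespace
        null_size=$((null_size + 1))                 # @40: next size (1011, 1012, ...)
        "$rpc" bdev_null_resize NULL1 "$null_size"   # @41: resize the NULL1 bdev under load
    done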
00:08:51.491 17:25:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:08:51.491 17:25:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:51.749 true 00:08:51.749 17:25:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:51.749 17:25:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.690 17:25:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.948 17:25:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:08:52.948 17:25:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:53.208 true 00:08:53.208 17:25:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:53.208 17:25:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.467 17:25:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.467 17:25:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:08:53.467 17:25:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:53.726 true 00:08:53.726 17:25:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:53.726 17:25:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.658 17:25:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.921 [2024-04-18 17:25:21.377636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.921 [2024-04-18 17:25:21.377738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1
00:08:54.921 [2024-04-18 17:25:21.377784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:298 nvmf_bdev_ctrlr_read_cmd error ("Read NLB 1 * block size 512 > SGL length 1") repeats for every read completed between 17:25:21.377829 and 17:25:21.392887 (log time 00:08:54.921-00:08:54.924); the identical entries are elided here ...]
00:08:54.924 [2024-04-18 17:25:21.392933] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.924 [2024-04-18 17:25:21.392978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.924 [2024-04-18 17:25:21.393022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.924 [2024-04-18 17:25:21.393065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.924 [2024-04-18 17:25:21.393200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.924 [2024-04-18 17:25:21.393252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.924 [2024-04-18 17:25:21.393297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.393342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.393416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.393473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.393523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.393567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.393612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.393658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.393727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.393771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.393815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.393862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.393908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.393954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 
[2024-04-18 17:25:21.394342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.394984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.395993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.396040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.396085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.396132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.396177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.396222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.396267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.396310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.396356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.396435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.396483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.396598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.925 [2024-04-18 17:25:21.396648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.926 [2024-04-18 17:25:21.396729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.926 [2024-04-18 17:25:21.396778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.926 [2024-04-18 17:25:21.396835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.926 [2024-04-18 17:25:21.396882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.926 [2024-04-18 17:25:21.396927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.926 [2024-04-18 17:25:21.396970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.926 [2024-04-18 17:25:21.397015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.926 [2024-04-18 17:25:21.397061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.926 [2024-04-18 17:25:21.397109] ctrlr_bdev.c: 
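All of the identical messages condensed above come from one validation in ctrlr_bdev.c: the target rejects a read whose payload (NLB x block size, here 1 x 512 bytes) is larger than the SGL the host supplied (length 1). The sketch below only illustrates that kind of check; the function and parameter names (read_cmd_length_ok, num_blocks, block_size, sgl_length) are hypothetical stand-ins, not the SPDK source.

```c
/* Minimal sketch of the length check behind the repeated
 * "Read NLB ... * block size ... > SGL length ..." message.
 * Names and signature are illustrative, not taken from SPDK. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool
read_cmd_length_ok(uint64_t num_blocks, uint32_t block_size, uint64_t sgl_length)
{
	/* Reject the read if it would transfer more data than the SGL describes. */
	if (num_blocks * block_size > sgl_length) {
		fprintf(stderr,
			"Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu64 "\n",
			num_blocks, block_size, sgl_length);
		return false;
	}
	return true;
}

int
main(void)
{
	/* The values seen in this log: NLB 1, block size 512, SGL length 1. */
	return read_cmd_length_ok(1, 512, 1) ? 0 : 1;
}
```

With those values 512 > 1, so every such read fails the check and is logged, which is why the same line repeats once per attempted read during the stress run.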
00:08:54.926 17:25:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022
00:08:54.926 17:25:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
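The rpc.py line above asks the running target to resize the null bdev NULL1 to 1022, as driven by null_size in the stress script. rpc.py speaks SPDK's JSON-RPC over a Unix socket (default /var/tmp/spdk.sock); a rough sketch of the request body such a call would produce is below. The "name"/"new_size" parameter names and the MiB unit are assumptions inferred from the CLI arguments in the trace, not verified against this build.

```c
/* Hedged sketch: the JSON-RPC 2.0 request corresponding to
 *   rpc.py bdev_null_resize NULL1 1022
 * The "name"/"new_size" parameter names are an assumption based on the
 * CLI arguments shown in the trace; only the payload is built here. */
#include <stdio.h>

int
main(void)
{
	const char *bdev_name = "NULL1";
	unsigned int new_size = 1022;	/* matches null_size=1022 in the trace */
	char request[256];

	snprintf(request, sizeof(request),
		 "{\"jsonrpc\": \"2.0\", \"id\": 1, \"method\": \"bdev_null_resize\", "
		 "\"params\": {\"name\": \"%s\", \"new_size\": %u}}",
		 bdev_name, new_size);

	/* rpc.py would write this to the RPC socket; here it is just printed. */
	puts(request);
	return 0;
}
```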
00:08:54.932 [the same read-length error keeps repeating around and after the trace lines above, for every remaining rejected read through 17:25:21.416205] 00:08:54.932
[2024-04-18 17:25:21.416248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.416986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.417973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.418977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.419023] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.419069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.419111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.932 [2024-04-18 17:25:21.419156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.419960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 
[2024-04-18 17:25:21.420319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.420991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.421983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.422030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.422076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.422123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.422234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.422287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.422347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.422419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.422467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.933 [2024-04-18 17:25:21.422514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.422560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.422603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.422651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.422712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.422756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.422800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.422843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.422886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.422938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.422983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423030] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.423959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 
[2024-04-18 17:25:21.424373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.424977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.934 [2024-04-18 17:25:21.425020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.425972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:54.935 [2024-04-18 17:25:21.426679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.426985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:08:54.935 [2024-04-18 17:25:21.427028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.427961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.428012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.428057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.428103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.935 [2024-04-18 17:25:21.428146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.428979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429745] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.429992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.430980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.431024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.431068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 
[2024-04-18 17:25:21.431118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.431163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.431206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.431250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.431291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.431332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.431402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.431453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.936 [2024-04-18 17:25:21.431503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.431596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.431641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.431713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.431759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.431805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.431848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.431894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.431940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.431987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.432959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433795] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937 [2024-04-18 17:25:21.433840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:54.937
[... identical "ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" messages repeated several hundred times; duplicate lines elided (console timestamps 00:08:54.937-00:08:55.230, log timestamps 2024-04-18 17:25:21.433883-17:25:21.462425) ...]
[2024-04-18 17:25:21.462449] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.462494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.462540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.462586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.462631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.462690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.462734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.462782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.462826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.462949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.462996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 
[2024-04-18 17:25:21.463753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.463994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.464961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.465991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.466034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.466077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.466122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.466233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.230 [2024-04-18 17:25:21.466293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466457] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.466969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 
[2024-04-18 17:25:21.467765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.467981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.468990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.469967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470482] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.470969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.471012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.231 [2024-04-18 17:25:21.471056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 
[2024-04-18 17:25:21.471886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.471977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.472895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.473910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474528] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.474999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.232 [2024-04-18 17:25:21.475838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.475881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 
[2024-04-18 17:25:21.475928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.475971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.476987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.477932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478587] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.478980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.479027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.479088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.479135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.479183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.479227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.479274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.479317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.479376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.479431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.479494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.233 [2024-04-18 17:25:21.479542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.234 [2024-04-18 17:25:21.479601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.234 [2024-04-18 17:25:21.479737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.234 [2024-04-18 17:25:21.479798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.234 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:55.234 [2024-04-18 17:25:21.479858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.234 [2024-04-18 17:25:21.479905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.234 [2024-04-18 17:25:21.479955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.234 [2024-04-18 17:25:21.480000] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:55.234 [2024-04-18 17:25:21.480051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR* line ("Read NLB 1 * block size 512 > SGL length 1") repeats back-to-back with timestamps running through 2024-04-18 17:25:21.508560 (elapsed time 00:08:55.234-00:08:55.240); the run continues on the following lines of the log ...]
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.508701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.508749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.508796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.508838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.508883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.508925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.508971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 
[2024-04-18 17:25:21.509924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.509969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.510999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.511897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.512025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.512071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.512115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.512161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.512204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.240 [2024-04-18 17:25:21.512250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.512293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.512336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.512405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.512454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.512505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.512551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.512599] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.512644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.512705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.512769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.512813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.512854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.512905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.513946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 
[2024-04-18 17:25:21.513989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.514960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.515971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516608] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.516968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.241 [2024-04-18 17:25:21.517016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.517923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 
[2024-04-18 17:25:21.517968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.518960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.519962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520670] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.520996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.521042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.521086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.521134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.521177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.521225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.521337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.521409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.521457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.521504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.521548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.521596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.242 [2024-04-18 17:25:21.521642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.521700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.521745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.521790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.521834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.521878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.521919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 
[2024-04-18 17:25:21.522035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.522902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.523956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524703] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.524981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.243 [2024-04-18 17:25:21.525829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.244 [2024-04-18 17:25:21.525872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.244 [2024-04-18 17:25:21.525918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.244 [2024-04-18 17:25:21.525963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.244 
[2024-04-18 17:25:21.526006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:08:55.245 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
(identical "Read NLB 1 * block size 512 > SGL length 1" errors from ctrlr_bdev.c:298:nvmf_bdev_ctrlr_read_cmd repeat continuously from 17:25:21.526006 through 17:25:21.554436) 
00:08:55.250 [2024-04-18 17:25:21.554436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.554488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.554535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.554582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.554631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.554690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.554736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.554778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.554820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.554864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.554911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555827] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.555960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.556959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 
[2024-04-18 17:25:21.557142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.557991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.558036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.558081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.558126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.558170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.558211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.558256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.558304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.250 [2024-04-18 17:25:21.558435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.558485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.558530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.558577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.558621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.558681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.558726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.558774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.558816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.558864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.558909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.558954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559802] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.559977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.560963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 
[2024-04-18 17:25:21.561184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.561971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.562960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.563003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.563050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.251 [2024-04-18 17:25:21.563095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563828] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.563960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.564980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 
[2024-04-18 17:25:21.565221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.565995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.566983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.252 [2024-04-18 17:25:21.567931] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.567976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.568978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 
[2024-04-18 17:25:21.569350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.569982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.570998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.253 [2024-04-18 17:25:21.571973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.254 [2024-04-18 17:25:21.572017] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:08:55.254 [2024-04-18 17:25:21.572062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:08:55.254 (the same ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats continuously from 17:25:21.572105 through 17:25:21.600322; identical repetitions omitted) 
00:08:55.256 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:08:55.260 [2024-04-18 17:25:21.600388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.600440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.600490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.600584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.600633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.600709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.600771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.600817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.600860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.600905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.600948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.600994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601795] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.601967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.602907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 
[2024-04-18 17:25:21.603178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.603904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.260 [2024-04-18 17:25:21.604777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.604826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.604870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.604914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.604966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605883] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.605972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.606962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 
[2024-04-18 17:25:21.607192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.607974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.608925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.609012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.609056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.609114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.609159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.609203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.609246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.609290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.609334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.261 [2024-04-18 17:25:21.609401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.609451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.609495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.609541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.609592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.609727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.609772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.609831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.609879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.609926] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.609970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.610984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 
[2024-04-18 17:25:21.611204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.611979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.612949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613895] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.613940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.614032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.614090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.614137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.614181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.614226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.614270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.614314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.262 [2024-04-18 17:25:21.614373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.614429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.614475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.614524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.614570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.614617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.614662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.614803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.614852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.614895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.614942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.614989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615224] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 
[2024-04-18 17:25:21.615268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.615993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.616957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.263 [2024-04-18 17:25:21.617920] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
[... the ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd *ERROR* "Read NLB 1 * block size 512 > SGL length 1" repeats continuously from 17:25:21.617967 through 17:25:21.646790 (console timestamps 00:08:55.263 to 00:08:55.269), interleaved with the lines below; duplicate log lines collapsed ...]
00:08:55.264 true 
00:08:55.267 17:25:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 
00:08:55.267 17:25:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:08:55.267 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:08:55.269 [2024-04-18 17:25:21.646790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 
[2024-04-18 17:25:21.646838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.646888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.646935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.646981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.269 [2024-04-18 17:25:21.647842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.647890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.647935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.647987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.648949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649588] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.649955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.650980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 
[2024-04-18 17:25:21.651027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.651988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.652974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.270 [2024-04-18 17:25:21.653788] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.653856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.653902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.653945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.653989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.654996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 
[2024-04-18 17:25:21.655131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.655989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.656957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.657978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658028] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.658973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 
[2024-04-18 17:25:21.659402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.271 [2024-04-18 17:25:21.659976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.660997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.661965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662100] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.662896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663411] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 
[2024-04-18 17:25:21.663511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.663940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.664954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.272 [2024-04-18 17:25:21.665003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
(the same ctrlr_bdev.c:298 nvmf_bdev_ctrlr_read_cmd error is logged hundreds of times with only the timestamps changing; the duplicate lines are omitted)
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.691308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.691355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.691427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.691473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.691517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.691562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.691608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.691730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.691795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.691843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.691889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.691935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.691981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:55.278 [2024-04-18 17:25:21.692539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692682] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.692967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.693959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 
[2024-04-18 17:25:21.694023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.694143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.694191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.694255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.694304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.694355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.694409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.694457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.278 [2024-04-18 17:25:21.694501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.694547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.694596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.694643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.694692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.694736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.694782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.694832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.694879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.694926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.694969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.695939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696809] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.696986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.697966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 
[2024-04-18 17:25:21.698319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.698972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.699020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.699066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.699111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.699164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.699211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.699256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.699304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.699359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.279 [2024-04-18 17:25:21.699517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.699566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.699615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.699662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.699712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.699758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.699824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.699886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.699934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.699993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.700995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701140] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.701993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 
[2024-04-18 17:25:21.702640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.702983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.703992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.704044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.704157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.704222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.704284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.704330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.704376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.704434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.704483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.280 [2024-04-18 17:25:21.704531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.704577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.704631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.704677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.704732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.704784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.704905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.704999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705509] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.705956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.706933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 
[2024-04-18 17:25:21.706986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.707980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.281 [2024-04-18 17:25:21.708961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709819] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.709993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710042] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.710992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 
[2024-04-18 17:25:21.711306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.711955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.712990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.713956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.282 [2024-04-18 17:25:21.714009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714208] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.714893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 
[2024-04-18 17:25:21.715625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.715983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.716961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.717948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718496] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718607] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.283 [2024-04-18 17:25:21.718987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 
[2024-04-18 17:25:21.719948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.719995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.720908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.721950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722809] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.722949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.284 [2024-04-18 17:25:21.723928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.723976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 
[2024-04-18 17:25:21.724212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.724998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.725981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.726990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727083] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.727958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.728004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.728052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.728098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.728258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.728309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.728356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.728417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.728467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.728513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 
[2024-04-18 17:25:21.728563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.728612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.728655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.285 [2024-04-18 17:25:21.728699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.728743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.728792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.728837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.728900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.728968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.729990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.730943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731410] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.731977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732159] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 
[2024-04-18 17:25:21.732839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.732953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.733006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.286 [2024-04-18 17:25:21.733056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.733990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.734951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735726] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.735954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.736997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.737046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 [2024-04-18 17:25:21.737094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 
[2024-04-18 17:25:21.737234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.287 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:08:55.561 
(identical ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd read-length errors repeat continuously from 17:25:21.737 through 17:25:21.766; duplicate lines omitted) 00:08:55.564 
[2024-04-18 17:25:21.766586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.766636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.766686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.766736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.766789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.766838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.766953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.767989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768050] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.565 [2024-04-18 17:25:21.768785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.768833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.768880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.768925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.768972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 
[2024-04-18 17:25:21.769416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769909] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.769958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.770964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.771967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.772096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.566 [2024-04-18 17:25:21.772148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772211] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.772972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 
[2024-04-18 17:25:21.773629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.773947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.774993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.775039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.775087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.775140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.775189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.775237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.775284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.775332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.775377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.775433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.775481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.775528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.567 [2024-04-18 17:25:21.775655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.775710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.775760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.775808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.775856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.775902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.775952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776367] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.776971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 
[2024-04-18 17:25:21.777793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.777986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.778979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.568 [2024-04-18 17:25:21.779025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.779985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780592] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.780996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.781968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.782033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 
[2024-04-18 17:25:21.782087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.782150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.782214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.782260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.782305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.782353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.782410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.782459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.782508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.569 [2024-04-18 17:25:21.782555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.782669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.782718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.782781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.782834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.782884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.782932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.782985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.783964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784904] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.570 [2024-04-18 17:25:21.784953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:08:55.570-00:08:55.579 [2024-04-18 17:25:21.785005 - 17:25:21.814248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (the same error entry repeated several hundred times over this interval) 
00:08:55.576 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:08:55.579 [2024-04-18 17:25:21.814297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.814980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.579 [2024-04-18 17:25:21.815027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815637] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.815957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.816979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 
[2024-04-18 17:25:21.817092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.817979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.818027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.818074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.818125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.818172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.818217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.818262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.580 [2024-04-18 17:25:21.818309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.818357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.818413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.818460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.818505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.818599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.818650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.818712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.818759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.818823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.818885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.818931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.818980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819694] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819893] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.819954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.820984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.821045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.821160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.821209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.821271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.821320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 
[2024-04-18 17:25:21.821370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.821425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.821476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.821524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.821570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.821616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.821666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.581 [2024-04-18 17:25:21.821711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.821762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.821811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.821857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.821911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.821957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.822997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.823977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824043] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824158] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.824992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.825039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.825084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.582 [2024-04-18 17:25:21.825132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 
[2024-04-18 17:25:21.825529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.825983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.826951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.827981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.828027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.828140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.828219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.828285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.828333] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.828390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.828440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.828489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.583 [2024-04-18 17:25:21.828538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.828594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.828643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.828692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.828739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.828789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.828836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.828885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.828958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 
[2024-04-18 17:25:21.829720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.829968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.830962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.831956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.832004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.832057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.584 [2024-04-18 17:25:21.832101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.585 [2024-04-18 17:25:21.832150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.585 [2024-04-18 17:25:21.832196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.585 [2024-04-18 17:25:21.832252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.585 [2024-04-18 17:25:21.832300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.585 [2024-04-18 17:25:21.832346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.585 [2024-04-18 17:25:21.832401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.585 [2024-04-18 17:25:21.832451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.585 [2024-04-18 17:25:21.832497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.585 [2024-04-18 17:25:21.832552] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:55.585 [2024-04-18 17:25:21.832599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:55.590 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:08:55.591 [2024-04-18 17:25:21.862113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:55.591
[2024-04-18 17:25:21.862162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.862988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.863988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.864038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.864087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.864148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.864212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.864260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.864308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.864428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.864482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.864527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.864574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.591 [2024-04-18 17:25:21.864618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.864665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.864713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.864759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.864809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.864856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.864904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.864958] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865004] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.865983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 
[2024-04-18 17:25:21.866406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 [2024-04-18 17:25:21.866902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.592 17:25:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.854 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.854 [2024-04-18 17:25:22.099897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.854 [2024-04-18 17:25:22.099953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.854 [2024-04-18 17:25:22.099997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.854 [2024-04-18 17:25:22.100036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.854 [2024-04-18 17:25:22.100074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.854 [2024-04-18 17:25:22.100118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.854 [2024-04-18 17:25:22.100162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.854 [2024-04-18 17:25:22.100203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.854 [2024-04-18 17:25:22.100248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.854 [2024-04-18 17:25:22.100292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:08:55.854 [2024-04-18 17:25:22.099897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:08:55.858 (the same *ERROR* line again repeats continuously, with only the timestamp changing, through [2024-04-18 17:25:22.117746])
00:08:55.858 [2024-04-18 17:25:22.117792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.117839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.117886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.117931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.117978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.118988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 17:25:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:08:55.858 [2024-04-18 17:25:22.119081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:08:55.858 [2024-04-18 17:25:22.119127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 17:25:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:55.858 [2024-04-18 17:25:22.119389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.119985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.858 [2024-04-18 17:25:22.120926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.120985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121845] 
ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.121980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.122998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123044] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:08:55.859 [2024-04-18 17:25:22.123133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.123985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.124029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.124075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.124118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.124167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.124213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.124319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.124388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.124435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.124483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.124531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.124577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.859 [2024-04-18 17:25:22.124622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.124691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.124737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.124786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.124829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.124869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.124915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125825] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.125917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.126983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 
[2024-04-18 17:25:22.127215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.127982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.128992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.129039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.129083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.129130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.129182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.129229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.860 [2024-04-18 17:25:22.129274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.129399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.129448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.129508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.129563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.129610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.129657] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.129719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.129768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.129812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.129859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.129905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.129954] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.129999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 [2024-04-18 17:25:22.130115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:08:55.861 true 00:08:55.861 17:25:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:55.861 17:25:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.799 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:56.799 17:25:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.056 17:25:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:08:57.056 17:25:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:57.314 true 00:08:57.314 17:25:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:57.314 17:25:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.572 17:25:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.830 17:25:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:08:57.830 17:25:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:57.830 true 00:08:58.089 17:25:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:58.089 17:25:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.658 17:25:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.916 17:25:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:08:58.916 17:25:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:59.174 true 00:08:59.174 17:25:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:59.174 17:25:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.433 17:25:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.692 17:25:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:08:59.692 17:25:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:59.949 true 00:08:59.949 17:25:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:08:59.949 17:25:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.206 17:25:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.464 17:25:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:09:00.464 17:25:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:00.723 true 00:09:00.723 17:25:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:09:00.723 17:25:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.723 Initializing NVMe Controllers 00:09:00.723 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:00.723 Controller IO queue size 128, less than required. 00:09:00.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:00.723 Controller IO queue size 128, less than required. 00:09:00.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:00.723 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:00.723 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:00.723 Initialization complete. Launching workers. 00:09:00.723 ======================================================== 00:09:00.723 Latency(us) 00:09:00.723 Device Information : IOPS MiB/s Average min max 00:09:00.723 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7117.80 3.48 14798.66 1141.78 1167971.35 00:09:00.723 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 26142.20 12.76 4896.18 2102.66 366713.13 00:09:00.723 ======================================================== 00:09:00.723 Total : 33260.00 16.24 7015.36 1141.78 1167971.35 00:09:00.723 00:09:00.982 17:25:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.240 17:25:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:09:01.240 17:25:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:01.499 true 00:09:01.499 17:25:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1944685 00:09:01.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1944685) - No such process 00:09:01.499 17:25:27 -- target/ns_hotplug_stress.sh@44 -- # wait 1944685 00:09:01.499 17:25:27 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:09:01.499 17:25:27 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:09:01.499 17:25:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:01.499 17:25:27 -- nvmf/common.sh@117 -- # sync 00:09:01.499 17:25:27 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:01.499 17:25:27 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:01.499 17:25:27 -- nvmf/common.sh@120 -- # set +e 00:09:01.499 17:25:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.499 17:25:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:01.499 rmmod nvme_rdma 00:09:01.499 rmmod nvme_fabrics 00:09:01.499 17:25:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.499 17:25:27 -- nvmf/common.sh@124 -- # set -e 00:09:01.499 17:25:27 -- nvmf/common.sh@125 -- # return 0 00:09:01.499 17:25:27 -- nvmf/common.sh@478 -- # '[' -n 1944270 ']' 
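The xtrace above is the namespace hot-plug loop from ns_hotplug_stress.sh: while I/O is in flight, the test checks that the target is still alive with kill -0, detaches namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attaches it backed by the Delay0 bdev, and resizes the NULL1 null bdev to the next size (1023, 1024, ...). A minimal sketch of that loop shape follows; it assumes an SPDK nvmf target is already running and serving that subsystem with the NULL1 and Delay0 bdevs created, and the rpc.py path, pid, and iteration count are taken from this log for illustration rather than being the test's exact logic.

#!/usr/bin/env bash
# Sketch of the hot-plug stress loop traced above (not the actual test script).
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # path seen in this log
tgt_pid=1944685                                                    # target pid seen in this log
null_size=1023

for i in {1..10}; do
    kill -0 "$tgt_pid" || break                                      # stop once the target dies
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # detach nsid 1 under load
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # re-attach it, backed by Delay0
    null_size=$((null_size + 1))
    "$rpc" bdev_null_resize NULL1 "$null_size"                       # grow NULL1 to the next size
done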
00:09:01.499 17:25:27 -- nvmf/common.sh@479 -- # killprocess 1944270 00:09:01.499 17:25:27 -- common/autotest_common.sh@936 -- # '[' -z 1944270 ']' 00:09:01.499 17:25:27 -- common/autotest_common.sh@940 -- # kill -0 1944270 00:09:01.499 17:25:27 -- common/autotest_common.sh@941 -- # uname 00:09:01.499 17:25:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:01.499 17:25:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1944270 00:09:01.499 17:25:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:01.499 17:25:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:01.499 17:25:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1944270' 00:09:01.499 killing process with pid 1944270 00:09:01.499 17:25:27 -- common/autotest_common.sh@955 -- # kill 1944270 00:09:01.499 17:25:27 -- common/autotest_common.sh@960 -- # wait 1944270 00:09:02.096 17:25:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:02.096 17:25:28 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:02.096 00:09:02.096 real 0m38.476s 00:09:02.096 user 2m40.853s 00:09:02.096 sys 0m5.663s 00:09:02.096 17:25:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:02.096 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:09:02.096 ************************************ 00:09:02.096 END TEST nvmf_ns_hotplug_stress 00:09:02.096 ************************************ 00:09:02.096 17:25:28 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:09:02.096 17:25:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:02.096 17:25:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.096 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:09:02.096 ************************************ 00:09:02.096 START TEST nvmf_connect_stress 00:09:02.096 ************************************ 00:09:02.096 17:25:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:09:02.096 * Looking for test storage... 
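The teardown traced at the end of the hot-plug test above is nvmftestfini plus killprocess: the host-side nvme-rdma and nvme-fabrics modules are removed, then the nvmf target application is killed and reaped. The sketch below is a simplified reconstruction of that pattern, not SPDK's actual nvmf/common.sh; the pid and module names come from this log, and it must run as root.

#!/usr/bin/env bash
# Simplified reconstruction of the teardown traced above (nvmftestfini + killprocess).
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if the process is gone
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it when it is a child of this shell
}

for i in {1..20}; do                         # retry while module references drain
    modprobe -v -r nvme-rdma && break
    sleep 1
done
modprobe -v -r nvme-fabrics

killprocess 1944270                          # the nvmf target app pid from this run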
00:09:02.096 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:02.096 17:25:28 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.096 17:25:28 -- nvmf/common.sh@7 -- # uname -s 00:09:02.096 17:25:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.096 17:25:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.096 17:25:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.096 17:25:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.096 17:25:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.096 17:25:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.096 17:25:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.096 17:25:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.096 17:25:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.096 17:25:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.096 17:25:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:02.096 17:25:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:02.096 17:25:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.096 17:25:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.096 17:25:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.096 17:25:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.096 17:25:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:02.096 17:25:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.096 17:25:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.096 17:25:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.096 17:25:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.096 17:25:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.096 17:25:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.096 17:25:28 -- paths/export.sh@5 -- # export PATH 00:09:02.097 17:25:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.097 17:25:28 -- nvmf/common.sh@47 -- # : 0 00:09:02.097 17:25:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.097 17:25:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.097 17:25:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.097 17:25:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.097 17:25:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.097 17:25:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.097 17:25:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.097 17:25:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.097 17:25:28 -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:02.097 17:25:28 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:02.097 17:25:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.097 17:25:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:02.097 17:25:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:02.097 17:25:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:02.097 17:25:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.097 17:25:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.097 17:25:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.097 17:25:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:02.097 17:25:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:02.097 17:25:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:02.097 17:25:28 -- common/autotest_common.sh@10 -- # set +x 00:09:04.002 17:25:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:04.002 17:25:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:04.002 17:25:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:04.002 17:25:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:04.002 17:25:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:04.002 17:25:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:04.002 17:25:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:04.002 17:25:30 -- nvmf/common.sh@295 -- # net_devs=() 00:09:04.002 17:25:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:04.002 17:25:30 -- nvmf/common.sh@296 -- # e810=() 00:09:04.002 17:25:30 -- nvmf/common.sh@296 -- # local -ga e810 00:09:04.002 17:25:30 -- nvmf/common.sh@297 -- # x722=() 
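The trace that follows is nvmf/common.sh enumerating RDMA-capable NICs: it builds lists of known Intel and Mellanox PCI IDs, matches them against the host's PCI bus via its own pci_bus_cache arrays, and maps each hit to its netdev through sysfs. A rough standalone equivalent is sketched below using lspci instead of the script's cache; the vendor:device pair 15b3:1017 is the one this log matches, and find_rdma_netdevs is a hypothetical helper, not a function from nvmf/common.sh.

#!/usr/bin/env bash
# List netdevs backed by Mellanox 0x15b3:0x1017 PCI functions, as the trace below does.
find_rdma_netdevs() {
    local pci dev
    for pci in $(lspci -Dn -d 15b3:1017 | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do              # netdev names live under sysfs
            [ -e "$dev" ] && echo "Found net devices under ${pci}: $(basename "$dev")"
        done
    done
}
find_rdma_netdevs
# On this host the log reports:
#   Found net devices under 0000:09:00.0: mlx_0_0
#   Found net devices under 0000:09:00.1: mlx_0_1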
00:09:04.002 17:25:30 -- nvmf/common.sh@297 -- # local -ga x722 00:09:04.002 17:25:30 -- nvmf/common.sh@298 -- # mlx=() 00:09:04.002 17:25:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:04.002 17:25:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.002 17:25:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.002 17:25:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.002 17:25:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.002 17:25:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.002 17:25:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.002 17:25:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.002 17:25:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.002 17:25:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.002 17:25:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.002 17:25:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.002 17:25:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:04.002 17:25:30 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:04.002 17:25:30 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:04.002 17:25:30 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:04.002 17:25:30 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:04.002 17:25:30 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:04.002 17:25:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:04.002 17:25:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.003 17:25:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:09:04.003 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:09:04.003 17:25:30 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:04.003 17:25:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.003 17:25:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:09:04.003 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:09:04.003 17:25:30 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:04.003 17:25:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:04.003 17:25:30 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.003 17:25:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.003 17:25:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:04.003 17:25:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.003 17:25:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:09:04.003 Found net devices under 0000:09:00.0: mlx_0_0 00:09:04.003 17:25:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.003 17:25:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.003 17:25:30 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.003 17:25:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:04.003 17:25:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.003 17:25:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:09:04.003 Found net devices under 0000:09:00.1: mlx_0_1 00:09:04.003 17:25:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.003 17:25:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:04.003 17:25:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:04.003 17:25:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:04.003 17:25:30 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:04.003 17:25:30 -- nvmf/common.sh@58 -- # uname 00:09:04.003 17:25:30 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:04.003 17:25:30 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:04.003 17:25:30 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:04.003 17:25:30 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:04.003 17:25:30 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:04.003 17:25:30 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:04.003 17:25:30 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:04.003 17:25:30 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:04.003 17:25:30 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:04.003 17:25:30 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:04.003 17:25:30 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:04.003 17:25:30 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:04.003 17:25:30 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:04.003 17:25:30 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:04.003 17:25:30 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:04.003 17:25:30 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:04.003 17:25:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:04.003 17:25:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.003 17:25:30 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:04.003 17:25:30 -- nvmf/common.sh@105 -- # continue 2 00:09:04.003 17:25:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:04.003 17:25:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.003 17:25:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.003 17:25:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:04.003 17:25:30 -- nvmf/common.sh@105 -- # continue 2 00:09:04.003 17:25:30 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:04.003 17:25:30 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:04.003 17:25:30 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:04.003 17:25:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:04.003 17:25:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:04.003 17:25:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:04.003 17:25:30 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:04.003 
17:25:30 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:04.003 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:04.003 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:09:04.003 altname enp9s0f0np0 00:09:04.003 inet 192.168.100.8/24 scope global mlx_0_0 00:09:04.003 valid_lft forever preferred_lft forever 00:09:04.003 17:25:30 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:04.003 17:25:30 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:04.003 17:25:30 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:04.003 17:25:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:04.003 17:25:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:04.003 17:25:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:04.003 17:25:30 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:04.003 17:25:30 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:04.003 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:04.003 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:09:04.003 altname enp9s0f1np1 00:09:04.003 inet 192.168.100.9/24 scope global mlx_0_1 00:09:04.003 valid_lft forever preferred_lft forever 00:09:04.003 17:25:30 -- nvmf/common.sh@411 -- # return 0 00:09:04.003 17:25:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:04.003 17:25:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:04.003 17:25:30 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:04.003 17:25:30 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:04.003 17:25:30 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:04.003 17:25:30 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:04.003 17:25:30 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:04.003 17:25:30 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:04.003 17:25:30 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:04.003 17:25:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:04.003 17:25:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.003 17:25:30 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:04.003 17:25:30 -- nvmf/common.sh@105 -- # continue 2 00:09:04.003 17:25:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:04.003 17:25:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.003 17:25:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.003 17:25:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:04.003 17:25:30 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:04.003 17:25:30 -- nvmf/common.sh@105 -- # continue 2 00:09:04.003 17:25:30 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:04.003 17:25:30 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:04.004 17:25:30 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:04.004 17:25:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:04.004 17:25:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:04.004 17:25:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:04.004 17:25:30 -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:09:04.004 17:25:30 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:04.004 17:25:30 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:04.004 17:25:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:04.004 17:25:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:04.004 17:25:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:04.004 17:25:30 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:04.004 192.168.100.9' 00:09:04.004 17:25:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:04.004 192.168.100.9' 00:09:04.004 17:25:30 -- nvmf/common.sh@446 -- # head -n 1 00:09:04.004 17:25:30 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:04.004 17:25:30 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:04.004 192.168.100.9' 00:09:04.004 17:25:30 -- nvmf/common.sh@447 -- # tail -n +2 00:09:04.004 17:25:30 -- nvmf/common.sh@447 -- # head -n 1 00:09:04.004 17:25:30 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:04.004 17:25:30 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:04.004 17:25:30 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:04.004 17:25:30 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:04.004 17:25:30 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:04.004 17:25:30 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:04.004 17:25:30 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:04.004 17:25:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:04.004 17:25:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:04.004 17:25:30 -- common/autotest_common.sh@10 -- # set +x 00:09:04.004 17:25:30 -- nvmf/common.sh@470 -- # nvmfpid=1950402 00:09:04.004 17:25:30 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:04.004 17:25:30 -- nvmf/common.sh@471 -- # waitforlisten 1950402 00:09:04.004 17:25:30 -- common/autotest_common.sh@817 -- # '[' -z 1950402 ']' 00:09:04.004 17:25:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.004 17:25:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:04.004 17:25:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.004 17:25:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:04.004 17:25:30 -- common/autotest_common.sh@10 -- # set +x 00:09:04.263 [2024-04-18 17:25:30.556934] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:09:04.263 [2024-04-18 17:25:30.557010] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.263 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.263 [2024-04-18 17:25:30.620744] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:04.263 [2024-04-18 17:25:30.734695] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.263 [2024-04-18 17:25:30.734767] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
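The nvmfappstart call traced above reduces to two steps: launch build/bin/nvmf_tgt with the requested core mask, then wait until its JSON-RPC socket at /var/tmp/spdk.sock is answering. A minimal sketch of that pattern, assuming that polling for the socket file is an acceptable stand-in for the real waitforlisten helper (the 100-iteration cap mirrors the max_retries value logged above):

  # Start the NVMe-oF target on cores 1-3 (-m 0xE) with all trace groups enabled (-e 0xFFFF).
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # Sketch of the startup wait: poll for the UNIX-domain RPC socket instead of sleeping blindly.
  for ((i = 0; i < 100; i++)); do
      [[ -S /var/tmp/spdk.sock ]] && break
      sleep 0.5
  done
  kill -0 "$nvmfpid"    # fail fast if the target died during startup

The connect_stress run that follows keeps issuing the same kind of kill -0 liveness probe, in its case against the connect_stress PID (1950559), until the 10-second run requested with -t 10 exits and the probe reports "No such process".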
00:09:04.263 [2024-04-18 17:25:30.734792] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.263 [2024-04-18 17:25:30.734814] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.263 [2024-04-18 17:25:30.734826] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.263 [2024-04-18 17:25:30.734912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.263 [2024-04-18 17:25:30.735029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.263 [2024-04-18 17:25:30.735031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.201 17:25:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:05.201 17:25:31 -- common/autotest_common.sh@850 -- # return 0 00:09:05.201 17:25:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:05.201 17:25:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:05.201 17:25:31 -- common/autotest_common.sh@10 -- # set +x 00:09:05.201 17:25:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.201 17:25:31 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:05.201 17:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.201 17:25:31 -- common/autotest_common.sh@10 -- # set +x 00:09:05.201 [2024-04-18 17:25:31.574448] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e84380/0x1e88870) succeed. 00:09:05.201 [2024-04-18 17:25:31.584834] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e858d0/0x1ec9f00) succeed. 00:09:05.201 17:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.201 17:25:31 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:05.201 17:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.201 17:25:31 -- common/autotest_common.sh@10 -- # set +x 00:09:05.201 17:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.201 17:25:31 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:05.201 17:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.201 17:25:31 -- common/autotest_common.sh@10 -- # set +x 00:09:05.202 [2024-04-18 17:25:31.722207] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:05.202 17:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.202 17:25:31 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:05.202 17:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.202 17:25:31 -- common/autotest_common.sh@10 -- # set +x 00:09:05.202 NULL1 00:09:05.202 17:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.202 17:25:31 -- target/connect_stress.sh@21 -- # PERF_PID=1950559 00:09:05.202 17:25:31 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:05.202 17:25:31 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:05.202 17:25:31 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 
traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:05.202 17:25:31 -- target/connect_stress.sh@27 -- # seq 1 20 00:09:05.202 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.202 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.462 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.462 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.462 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.462 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.462 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.462 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.462 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.462 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.462 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.462 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.462 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.462 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.462 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.462 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.462 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.462 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.462 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.462 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.462 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.462 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.462 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.462 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.462 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.463 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.463 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.463 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.463 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.463 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.463 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.463 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.463 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.463 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.463 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.463 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.463 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.463 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.463 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.463 17:25:31 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:05.463 17:25:31 -- target/connect_stress.sh@28 -- # cat 00:09:05.463 17:25:31 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:05.463 17:25:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.463 17:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.463 17:25:31 -- common/autotest_common.sh@10 -- # set +x 00:09:05.722 17:25:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.722 17:25:32 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:05.722 17:25:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.722 17:25:32 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.722 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:09:05.983 17:25:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.983 17:25:32 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:05.983 17:25:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:05.983 17:25:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.983 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:09:06.242 17:25:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.242 17:25:32 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:06.242 17:25:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.242 17:25:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.242 17:25:32 -- common/autotest_common.sh@10 -- # set +x 00:09:06.812 17:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.812 17:25:33 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:06.812 17:25:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:06.812 17:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.812 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:09:07.071 17:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.071 17:25:33 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:07.071 17:25:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:07.071 17:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.071 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:09:07.328 17:25:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.328 17:25:33 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:07.328 17:25:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:07.328 17:25:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.328 17:25:33 -- common/autotest_common.sh@10 -- # set +x 00:09:07.588 17:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.588 17:25:34 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:07.588 17:25:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:07.588 17:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.588 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:09:07.847 17:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.847 17:25:34 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:07.847 17:25:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:07.847 17:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.847 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:09:08.416 17:25:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.416 17:25:34 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:08.416 17:25:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.416 17:25:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.416 17:25:34 -- common/autotest_common.sh@10 -- # set +x 00:09:08.675 17:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.675 17:25:35 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:08.675 17:25:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.675 17:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.675 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:09:08.938 17:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.938 17:25:35 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:08.938 17:25:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.938 17:25:35 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.938 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:09:09.198 17:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.198 17:25:35 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:09.198 17:25:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.198 17:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.198 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:09:09.457 17:25:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.457 17:25:35 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:09.457 17:25:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.457 17:25:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.457 17:25:35 -- common/autotest_common.sh@10 -- # set +x 00:09:10.025 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.025 17:25:36 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:10.025 17:25:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.025 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.025 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:09:10.285 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.285 17:25:36 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:10.285 17:25:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.285 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.285 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:09:10.543 17:25:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.543 17:25:36 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:10.543 17:25:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.543 17:25:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.543 17:25:36 -- common/autotest_common.sh@10 -- # set +x 00:09:10.802 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.802 17:25:37 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:10.802 17:25:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.802 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.802 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:09:11.062 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.062 17:25:37 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:11.062 17:25:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.062 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.062 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:09:11.631 17:25:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.631 17:25:37 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:11.631 17:25:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.631 17:25:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.631 17:25:37 -- common/autotest_common.sh@10 -- # set +x 00:09:11.891 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.891 17:25:38 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:11.891 17:25:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.891 17:25:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.891 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:09:12.150 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.150 17:25:38 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:12.150 17:25:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.150 17:25:38 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.150 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:09:12.410 17:25:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.410 17:25:38 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:12.410 17:25:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.410 17:25:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.410 17:25:38 -- common/autotest_common.sh@10 -- # set +x 00:09:12.670 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.670 17:25:39 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:12.670 17:25:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.670 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.670 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:09:13.240 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.240 17:25:39 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:13.240 17:25:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.240 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.240 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:09:13.499 17:25:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.499 17:25:39 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:13.499 17:25:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.499 17:25:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.499 17:25:39 -- common/autotest_common.sh@10 -- # set +x 00:09:13.757 17:25:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.757 17:25:40 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:13.757 17:25:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.757 17:25:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.757 17:25:40 -- common/autotest_common.sh@10 -- # set +x 00:09:14.015 17:25:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.015 17:25:40 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:14.015 17:25:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.015 17:25:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.015 17:25:40 -- common/autotest_common.sh@10 -- # set +x 00:09:14.273 17:25:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.273 17:25:40 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:14.273 17:25:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.273 17:25:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.273 17:25:40 -- common/autotest_common.sh@10 -- # set +x 00:09:14.867 17:25:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.867 17:25:41 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:14.867 17:25:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.867 17:25:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.867 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.125 17:25:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.125 17:25:41 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:15.125 17:25:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.125 17:25:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.125 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.383 17:25:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.383 17:25:41 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:15.383 17:25:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.383 17:25:41 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.383 17:25:41 -- common/autotest_common.sh@10 -- # set +x 00:09:15.643 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:15.643 17:25:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.643 17:25:42 -- target/connect_stress.sh@34 -- # kill -0 1950559 00:09:15.643 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1950559) - No such process 00:09:15.643 17:25:42 -- target/connect_stress.sh@38 -- # wait 1950559 00:09:15.643 17:25:42 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:15.643 17:25:42 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:15.643 17:25:42 -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:15.643 17:25:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:15.643 17:25:42 -- nvmf/common.sh@117 -- # sync 00:09:15.643 17:25:42 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:15.643 17:25:42 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:15.643 17:25:42 -- nvmf/common.sh@120 -- # set +e 00:09:15.643 17:25:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:15.643 17:25:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:15.643 rmmod nvme_rdma 00:09:15.643 rmmod nvme_fabrics 00:09:15.643 17:25:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:15.643 17:25:42 -- nvmf/common.sh@124 -- # set -e 00:09:15.643 17:25:42 -- nvmf/common.sh@125 -- # return 0 00:09:15.643 17:25:42 -- nvmf/common.sh@478 -- # '[' -n 1950402 ']' 00:09:15.643 17:25:42 -- nvmf/common.sh@479 -- # killprocess 1950402 00:09:15.643 17:25:42 -- common/autotest_common.sh@936 -- # '[' -z 1950402 ']' 00:09:15.643 17:25:42 -- common/autotest_common.sh@940 -- # kill -0 1950402 00:09:15.643 17:25:42 -- common/autotest_common.sh@941 -- # uname 00:09:15.643 17:25:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:15.643 17:25:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1950402 00:09:15.643 17:25:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:15.643 17:25:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:15.643 17:25:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1950402' 00:09:15.643 killing process with pid 1950402 00:09:15.643 17:25:42 -- common/autotest_common.sh@955 -- # kill 1950402 00:09:15.643 17:25:42 -- common/autotest_common.sh@960 -- # wait 1950402 00:09:16.214 17:25:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:16.214 17:25:42 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:16.214 00:09:16.214 real 0m14.101s 00:09:16.214 user 0m41.711s 00:09:16.214 sys 0m3.783s 00:09:16.214 17:25:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:16.214 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:09:16.214 ************************************ 00:09:16.214 END TEST nvmf_connect_stress 00:09:16.214 ************************************ 00:09:16.214 17:25:42 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:09:16.214 17:25:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:16.214 17:25:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:16.214 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:09:16.214 ************************************ 00:09:16.214 START TEST 
nvmf_fused_ordering 00:09:16.214 ************************************ 00:09:16.214 17:25:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:09:16.214 * Looking for test storage... 00:09:16.214 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:16.214 17:25:42 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.214 17:25:42 -- nvmf/common.sh@7 -- # uname -s 00:09:16.214 17:25:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.214 17:25:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.214 17:25:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.214 17:25:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.214 17:25:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.214 17:25:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.214 17:25:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.214 17:25:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.214 17:25:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.214 17:25:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.214 17:25:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:16.214 17:25:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:16.214 17:25:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.214 17:25:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.214 17:25:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.214 17:25:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.214 17:25:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:16.214 17:25:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.214 17:25:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.214 17:25:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.214 17:25:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.214 17:25:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.214 17:25:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.214 17:25:42 -- paths/export.sh@5 -- # export PATH 00:09:16.214 17:25:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.214 17:25:42 -- nvmf/common.sh@47 -- # : 0 00:09:16.214 17:25:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:16.214 17:25:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:16.214 17:25:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.214 17:25:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.214 17:25:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.214 17:25:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:16.214 17:25:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:16.214 17:25:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:16.214 17:25:42 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:16.214 17:25:42 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:16.214 17:25:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.214 17:25:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:16.214 17:25:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:16.214 17:25:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:16.215 17:25:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.215 17:25:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:16.215 17:25:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.215 17:25:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:16.215 17:25:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:16.215 17:25:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:16.215 17:25:42 -- common/autotest_common.sh@10 -- # set +x 00:09:18.128 17:25:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:18.128 17:25:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:18.128 17:25:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:18.128 17:25:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:18.128 17:25:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:18.128 17:25:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:18.128 17:25:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:18.128 17:25:44 -- nvmf/common.sh@295 -- # net_devs=() 00:09:18.128 17:25:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:18.128 17:25:44 -- nvmf/common.sh@296 -- # e810=() 00:09:18.128 17:25:44 -- nvmf/common.sh@296 -- # local -ga e810 00:09:18.128 17:25:44 -- nvmf/common.sh@297 -- # x722=() 
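The gather_supported_nvmf_pci_devs step the fused_ordering test re-enters here is the same NIC discovery already traced for connect_stress: build per-vendor device-ID lists, keep only the Mellanox set because SPDK_TEST_NVMF_NICS=mlx5, and resolve each PCI address to its netdev name through sysfs. A minimal sketch of that enumeration, assuming pci_bus_cache maps "vendor:device" ID pairs to space-separated PCI addresses the way the expanded array assignments in the trace suggest:

  # Stand-in for the real PCI cache; the two addresses are the ones this run reports.
  declare -A pci_bus_cache=( ["0x15b3:0x1017"]="0000:09:00.0 0000:09:00.1" )

  # Collect ConnectX parts (vendor 0x15b3); 0x1017 is the device ID shown for both ports.
  mlx=( ${pci_bus_cache["0x15b3:0x1017"]} )
  pci_devs=( "${mlx[@]}" )
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )   # netdev directories for this function
      pci_net_devs=( "${pci_net_devs[@]##*/}" )            # strip the path, leaving e.g. mlx_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=( "${pci_net_devs[@]}" )
  done

With both ports bound to mlx5_core this leaves net_devs=(mlx_0_0 mlx_0_1), matching the "Found net devices under 0000:09:00.0/0000:09:00.1" lines the trace prints next.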
00:09:18.128 17:25:44 -- nvmf/common.sh@297 -- # local -ga x722 00:09:18.128 17:25:44 -- nvmf/common.sh@298 -- # mlx=() 00:09:18.128 17:25:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:18.128 17:25:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.128 17:25:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.128 17:25:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.128 17:25:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.128 17:25:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.128 17:25:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.128 17:25:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.128 17:25:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.128 17:25:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.128 17:25:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.128 17:25:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.128 17:25:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:18.128 17:25:44 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:18.128 17:25:44 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:18.128 17:25:44 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:18.128 17:25:44 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:18.128 17:25:44 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:18.128 17:25:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:18.128 17:25:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.128 17:25:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:09:18.128 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:09:18.128 17:25:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:18.128 17:25:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:18.128 17:25:44 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:18.128 17:25:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:18.128 17:25:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.128 17:25:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:09:18.128 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:09:18.128 17:25:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:18.128 17:25:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:18.128 17:25:44 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:18.128 17:25:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:18.128 17:25:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:18.128 17:25:44 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:18.128 17:25:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.128 17:25:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.128 17:25:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:18.128 17:25:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.128 17:25:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:09:18.128 Found net devices under 0000:09:00.0: mlx_0_0 00:09:18.128 17:25:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.128 17:25:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.128 17:25:44 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.128 17:25:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:18.128 17:25:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.128 17:25:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:09:18.128 Found net devices under 0000:09:00.1: mlx_0_1 00:09:18.128 17:25:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.128 17:25:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:18.128 17:25:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:18.128 17:25:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:18.128 17:25:44 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:18.128 17:25:44 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:18.128 17:25:44 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:18.128 17:25:44 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:18.128 17:25:44 -- nvmf/common.sh@58 -- # uname 00:09:18.128 17:25:44 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:18.128 17:25:44 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:18.390 17:25:44 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:18.390 17:25:44 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:18.390 17:25:44 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:18.390 17:25:44 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:18.390 17:25:44 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:18.390 17:25:44 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:18.390 17:25:44 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:18.390 17:25:44 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:18.390 17:25:44 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:18.390 17:25:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:18.390 17:25:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:18.390 17:25:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:18.390 17:25:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:18.390 17:25:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:18.390 17:25:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:18.390 17:25:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.390 17:25:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:18.390 17:25:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:18.390 17:25:44 -- nvmf/common.sh@105 -- # continue 2 00:09:18.390 17:25:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:18.390 17:25:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.390 17:25:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:18.390 17:25:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.390 17:25:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:18.390 17:25:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:18.390 17:25:44 -- nvmf/common.sh@105 -- # continue 2 00:09:18.390 17:25:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:18.390 17:25:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:18.390 17:25:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:18.390 17:25:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:18.390 17:25:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:18.390 17:25:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:18.390 17:25:44 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:18.390 
17:25:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:18.390 17:25:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:18.390 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:18.390 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:09:18.390 altname enp9s0f0np0 00:09:18.390 inet 192.168.100.8/24 scope global mlx_0_0 00:09:18.390 valid_lft forever preferred_lft forever 00:09:18.390 17:25:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:18.390 17:25:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:18.390 17:25:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:18.390 17:25:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:18.390 17:25:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:18.390 17:25:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:18.390 17:25:44 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:18.390 17:25:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:18.390 17:25:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:18.390 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:18.390 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:09:18.390 altname enp9s0f1np1 00:09:18.390 inet 192.168.100.9/24 scope global mlx_0_1 00:09:18.390 valid_lft forever preferred_lft forever 00:09:18.390 17:25:44 -- nvmf/common.sh@411 -- # return 0 00:09:18.390 17:25:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:18.390 17:25:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:18.390 17:25:44 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:18.390 17:25:44 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:18.390 17:25:44 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:18.390 17:25:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:18.390 17:25:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:18.390 17:25:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:18.390 17:25:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:18.390 17:25:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:18.390 17:25:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:18.390 17:25:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.390 17:25:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:18.390 17:25:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:18.390 17:25:44 -- nvmf/common.sh@105 -- # continue 2 00:09:18.390 17:25:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:18.390 17:25:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.390 17:25:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:18.390 17:25:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.390 17:25:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:18.390 17:25:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:18.390 17:25:44 -- nvmf/common.sh@105 -- # continue 2 00:09:18.390 17:25:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:18.390 17:25:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:18.390 17:25:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:18.390 17:25:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:18.390 17:25:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:18.390 17:25:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:18.390 17:25:44 -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:09:18.390 17:25:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:18.391 17:25:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:18.391 17:25:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:18.391 17:25:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:18.391 17:25:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:18.391 17:25:44 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:18.391 192.168.100.9' 00:09:18.391 17:25:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:18.391 192.168.100.9' 00:09:18.391 17:25:44 -- nvmf/common.sh@446 -- # head -n 1 00:09:18.391 17:25:44 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:18.391 17:25:44 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:18.391 192.168.100.9' 00:09:18.391 17:25:44 -- nvmf/common.sh@447 -- # tail -n +2 00:09:18.391 17:25:44 -- nvmf/common.sh@447 -- # head -n 1 00:09:18.391 17:25:44 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:18.391 17:25:44 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:18.391 17:25:44 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:18.391 17:25:44 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:18.391 17:25:44 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:18.391 17:25:44 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:18.391 17:25:44 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:18.391 17:25:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:18.391 17:25:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:18.391 17:25:44 -- common/autotest_common.sh@10 -- # set +x 00:09:18.391 17:25:44 -- nvmf/common.sh@470 -- # nvmfpid=1953449 00:09:18.391 17:25:44 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:18.391 17:25:44 -- nvmf/common.sh@471 -- # waitforlisten 1953449 00:09:18.391 17:25:44 -- common/autotest_common.sh@817 -- # '[' -z 1953449 ']' 00:09:18.391 17:25:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.391 17:25:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:18.391 17:25:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.391 17:25:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:18.391 17:25:44 -- common/autotest_common.sh@10 -- # set +x 00:09:18.391 [2024-04-18 17:25:44.825884] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:09:18.391 [2024-04-18 17:25:44.825963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.391 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.391 [2024-04-18 17:25:44.883127] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.651 [2024-04-18 17:25:44.993472] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.651 [2024-04-18 17:25:44.993538] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
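With NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9 parsed out of RDMA_IP_LIST (head -n 1, then tail -n +2 | head -n 1, exactly as traced above), the target side is configured through the rpc_cmd calls that follow. A sketch of that sequence, assuming rpc_cmd forwards to scripts/rpc.py against the default /var/tmp/spdk.sock mentioned in the waitforlisten message:

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks: the 1GB namespace reported below
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary is then pointed at that listener (trtype:rdma traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1) and prints the numbered fused_ordering(N) progress lines that close this excerpt.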
00:09:18.651 [2024-04-18 17:25:44.993552] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.651 [2024-04-18 17:25:44.993563] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.651 [2024-04-18 17:25:44.993574] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:18.651 [2024-04-18 17:25:44.993602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.651 17:25:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:18.651 17:25:45 -- common/autotest_common.sh@850 -- # return 0 00:09:18.651 17:25:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:18.651 17:25:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:18.651 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:09:18.651 17:25:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.651 17:25:45 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:18.651 17:25:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.651 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:09:18.651 [2024-04-18 17:25:45.161245] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21f9cb0/0x21fe1a0) succeed. 00:09:18.651 [2024-04-18 17:25:45.172997] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21fb1b0/0x223f830) succeed. 00:09:18.910 17:25:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.910 17:25:45 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:18.910 17:25:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.910 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:09:18.910 17:25:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.910 17:25:45 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:18.910 17:25:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.910 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:09:18.910 [2024-04-18 17:25:45.240090] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:18.910 17:25:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.910 17:25:45 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:18.910 17:25:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.910 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:09:18.910 NULL1 00:09:18.910 17:25:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.910 17:25:45 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:18.910 17:25:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.910 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:09:18.910 17:25:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.910 17:25:45 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:18.910 17:25:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.910 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:09:18.910 17:25:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.910 17:25:45 -- target/fused_ordering.sh@22 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:18.910 [2024-04-18 17:25:45.284449] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:09:18.910 [2024-04-18 17:25:45.284489] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953593 ] 00:09:18.910 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.169 Attached to nqn.2016-06.io.spdk:cnode1 00:09:19.169 Namespace ID: 1 size: 1GB 00:09:19.169 fused_ordering(0) 00:09:19.169 fused_ordering(1) 00:09:19.169 fused_ordering(2) 00:09:19.169 fused_ordering(3) 00:09:19.169 fused_ordering(4) 00:09:19.169 fused_ordering(5) 00:09:19.169 fused_ordering(6) 00:09:19.169 fused_ordering(7) 00:09:19.169 fused_ordering(8) 00:09:19.169 fused_ordering(9) 00:09:19.169 fused_ordering(10) 00:09:19.169 fused_ordering(11) 00:09:19.169 fused_ordering(12) 00:09:19.169 fused_ordering(13) 00:09:19.169 fused_ordering(14) 00:09:19.169 fused_ordering(15) 00:09:19.169 fused_ordering(16) 00:09:19.169 fused_ordering(17) 00:09:19.169 fused_ordering(18) 00:09:19.169 fused_ordering(19) 00:09:19.169 fused_ordering(20) 00:09:19.169 fused_ordering(21) 00:09:19.169 fused_ordering(22) 00:09:19.169 fused_ordering(23) 00:09:19.169 fused_ordering(24) 00:09:19.169 fused_ordering(25) 00:09:19.169 fused_ordering(26) 00:09:19.169 fused_ordering(27) 00:09:19.169 fused_ordering(28) 00:09:19.169 fused_ordering(29) 00:09:19.169 fused_ordering(30) 00:09:19.169 fused_ordering(31) 00:09:19.169 fused_ordering(32) 00:09:19.169 fused_ordering(33) 00:09:19.169 fused_ordering(34) 00:09:19.169 fused_ordering(35) 00:09:19.169 fused_ordering(36) 00:09:19.169 fused_ordering(37) 00:09:19.169 fused_ordering(38) 00:09:19.169 fused_ordering(39) 00:09:19.169 fused_ordering(40) 00:09:19.169 fused_ordering(41) 00:09:19.169 fused_ordering(42) 00:09:19.169 fused_ordering(43) 00:09:19.169 fused_ordering(44) 00:09:19.169 fused_ordering(45) 00:09:19.169 fused_ordering(46) 00:09:19.169 fused_ordering(47) 00:09:19.169 fused_ordering(48) 00:09:19.169 fused_ordering(49) 00:09:19.169 fused_ordering(50) 00:09:19.169 fused_ordering(51) 00:09:19.169 fused_ordering(52) 00:09:19.169 fused_ordering(53) 00:09:19.169 fused_ordering(54) 00:09:19.169 fused_ordering(55) 00:09:19.169 fused_ordering(56) 00:09:19.169 fused_ordering(57) 00:09:19.170 fused_ordering(58) 00:09:19.170 fused_ordering(59) 00:09:19.170 fused_ordering(60) 00:09:19.170 fused_ordering(61) 00:09:19.170 fused_ordering(62) 00:09:19.170 fused_ordering(63) 00:09:19.170 fused_ordering(64) 00:09:19.170 fused_ordering(65) 00:09:19.170 fused_ordering(66) 00:09:19.170 fused_ordering(67) 00:09:19.170 fused_ordering(68) 00:09:19.170 fused_ordering(69) 00:09:19.170 fused_ordering(70) 00:09:19.170 fused_ordering(71) 00:09:19.170 fused_ordering(72) 00:09:19.170 fused_ordering(73) 00:09:19.170 fused_ordering(74) 00:09:19.170 fused_ordering(75) 00:09:19.170 fused_ordering(76) 00:09:19.170 fused_ordering(77) 00:09:19.170 fused_ordering(78) 00:09:19.170 fused_ordering(79) 00:09:19.170 fused_ordering(80) 00:09:19.170 fused_ordering(81) 00:09:19.170 fused_ordering(82) 00:09:19.170 fused_ordering(83) 00:09:19.170 fused_ordering(84) 00:09:19.170 fused_ordering(85) 00:09:19.170 fused_ordering(86) 00:09:19.170 fused_ordering(87) 
00:09:19.170 fused_ordering(88) ... fused_ordering(947) 00:09:19.691
fused_ordering(948) 00:09:19.691 fused_ordering(949) 00:09:19.691 fused_ordering(950) 00:09:19.692 fused_ordering(951) 00:09:19.692 fused_ordering(952) 00:09:19.692 fused_ordering(953) 00:09:19.692 fused_ordering(954) 00:09:19.692 fused_ordering(955) 00:09:19.692 fused_ordering(956) 00:09:19.692 fused_ordering(957) 00:09:19.692 fused_ordering(958) 00:09:19.692 fused_ordering(959) 00:09:19.692 fused_ordering(960) 00:09:19.692 fused_ordering(961) 00:09:19.692 fused_ordering(962) 00:09:19.692 fused_ordering(963) 00:09:19.692 fused_ordering(964) 00:09:19.692 fused_ordering(965) 00:09:19.692 fused_ordering(966) 00:09:19.692 fused_ordering(967) 00:09:19.692 fused_ordering(968) 00:09:19.692 fused_ordering(969) 00:09:19.692 fused_ordering(970) 00:09:19.692 fused_ordering(971) 00:09:19.692 fused_ordering(972) 00:09:19.692 fused_ordering(973) 00:09:19.692 fused_ordering(974) 00:09:19.692 fused_ordering(975) 00:09:19.692 fused_ordering(976) 00:09:19.692 fused_ordering(977) 00:09:19.692 fused_ordering(978) 00:09:19.692 fused_ordering(979) 00:09:19.692 fused_ordering(980) 00:09:19.692 fused_ordering(981) 00:09:19.692 fused_ordering(982) 00:09:19.692 fused_ordering(983) 00:09:19.692 fused_ordering(984) 00:09:19.692 fused_ordering(985) 00:09:19.692 fused_ordering(986) 00:09:19.692 fused_ordering(987) 00:09:19.692 fused_ordering(988) 00:09:19.692 fused_ordering(989) 00:09:19.692 fused_ordering(990) 00:09:19.692 fused_ordering(991) 00:09:19.692 fused_ordering(992) 00:09:19.692 fused_ordering(993) 00:09:19.692 fused_ordering(994) 00:09:19.692 fused_ordering(995) 00:09:19.692 fused_ordering(996) 00:09:19.692 fused_ordering(997) 00:09:19.692 fused_ordering(998) 00:09:19.692 fused_ordering(999) 00:09:19.692 fused_ordering(1000) 00:09:19.692 fused_ordering(1001) 00:09:19.692 fused_ordering(1002) 00:09:19.692 fused_ordering(1003) 00:09:19.692 fused_ordering(1004) 00:09:19.692 fused_ordering(1005) 00:09:19.692 fused_ordering(1006) 00:09:19.692 fused_ordering(1007) 00:09:19.692 fused_ordering(1008) 00:09:19.692 fused_ordering(1009) 00:09:19.692 fused_ordering(1010) 00:09:19.692 fused_ordering(1011) 00:09:19.692 fused_ordering(1012) 00:09:19.692 fused_ordering(1013) 00:09:19.692 fused_ordering(1014) 00:09:19.692 fused_ordering(1015) 00:09:19.692 fused_ordering(1016) 00:09:19.692 fused_ordering(1017) 00:09:19.692 fused_ordering(1018) 00:09:19.692 fused_ordering(1019) 00:09:19.692 fused_ordering(1020) 00:09:19.692 fused_ordering(1021) 00:09:19.692 fused_ordering(1022) 00:09:19.692 fused_ordering(1023) 00:09:19.692 17:25:46 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:19.692 17:25:46 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:19.692 17:25:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:19.692 17:25:46 -- nvmf/common.sh@117 -- # sync 00:09:19.692 17:25:46 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:19.692 17:25:46 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:19.692 17:25:46 -- nvmf/common.sh@120 -- # set +e 00:09:19.692 17:25:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.692 17:25:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:19.692 rmmod nvme_rdma 00:09:19.692 rmmod nvme_fabrics 00:09:19.692 17:25:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.692 17:25:46 -- nvmf/common.sh@124 -- # set -e 00:09:19.692 17:25:46 -- nvmf/common.sh@125 -- # return 0 00:09:19.692 17:25:46 -- nvmf/common.sh@478 -- # '[' -n 1953449 ']' 00:09:19.692 17:25:46 -- nvmf/common.sh@479 -- # killprocess 1953449 00:09:19.692 17:25:46 -- 
common/autotest_common.sh@936 -- # '[' -z 1953449 ']' 00:09:19.692 17:25:46 -- common/autotest_common.sh@940 -- # kill -0 1953449 00:09:19.692 17:25:46 -- common/autotest_common.sh@941 -- # uname 00:09:19.692 17:25:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:19.692 17:25:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1953449 00:09:19.692 17:25:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:19.692 17:25:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:19.692 17:25:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1953449' 00:09:19.692 killing process with pid 1953449 00:09:19.692 17:25:46 -- common/autotest_common.sh@955 -- # kill 1953449 00:09:19.692 17:25:46 -- common/autotest_common.sh@960 -- # wait 1953449 00:09:20.259 17:25:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:20.259 17:25:46 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:20.259 00:09:20.259 real 0m3.896s 00:09:20.259 user 0m3.370s 00:09:20.259 sys 0m1.762s 00:09:20.259 17:25:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:20.259 17:25:46 -- common/autotest_common.sh@10 -- # set +x 00:09:20.259 ************************************ 00:09:20.259 END TEST nvmf_fused_ordering 00:09:20.259 ************************************ 00:09:20.259 17:25:46 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:20.259 17:25:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:20.259 17:25:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.259 17:25:46 -- common/autotest_common.sh@10 -- # set +x 00:09:20.259 ************************************ 00:09:20.259 START TEST nvmf_delete_subsystem 00:09:20.259 ************************************ 00:09:20.259 17:25:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:09:20.259 * Looking for test storage... 
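The nvmf_fused_ordering teardown traced just above reduces to a short shell sequence: unload the host-side NVMe fabrics modules, then kill and reap the target process by its stored pid. A sketch under those assumptions ($nvmfpid stands in for the pid the harness tracked, 1953449 in this run, and the target must be a child of the calling shell for wait to apply):

    # Host-side module unload plus target shutdown, mirroring the trace above.
    modprobe -v -r nvme-rdma       # the verbose output above shows nvme_rdma and nvme_fabrics being removed
    modprobe -v -r nvme-fabrics    # harmless if the previous call already dropped it
    kill "$nvmfpid"                # nvmf_tgt pid saved at startup
    wait "$nvmfpid" 2>/dev/null    # reap it so the next test starts clean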
00:09:20.259 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:20.259 17:25:46 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.259 17:25:46 -- nvmf/common.sh@7 -- # uname -s 00:09:20.259 17:25:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.259 17:25:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.259 17:25:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.259 17:25:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.259 17:25:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.259 17:25:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.259 17:25:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.259 17:25:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.259 17:25:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.259 17:25:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.259 17:25:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:20.259 17:25:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:20.259 17:25:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.259 17:25:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.259 17:25:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.259 17:25:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.259 17:25:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:20.259 17:25:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.259 17:25:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.259 17:25:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.259 17:25:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.259 17:25:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.259 17:25:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.259 17:25:46 -- paths/export.sh@5 -- # export PATH 00:09:20.259 17:25:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.259 17:25:46 -- nvmf/common.sh@47 -- # : 0 00:09:20.259 17:25:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.259 17:25:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.259 17:25:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.259 17:25:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.259 17:25:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.259 17:25:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.259 17:25:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.259 17:25:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.259 17:25:46 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:20.259 17:25:46 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:20.259 17:25:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.259 17:25:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:20.259 17:25:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:20.259 17:25:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:20.259 17:25:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.259 17:25:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.259 17:25:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.259 17:25:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:20.259 17:25:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:20.259 17:25:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.259 17:25:46 -- common/autotest_common.sh@10 -- # set +x 00:09:22.164 17:25:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:22.164 17:25:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:22.164 17:25:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:22.164 17:25:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:22.164 17:25:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:22.164 17:25:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:22.164 17:25:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:22.164 17:25:48 -- nvmf/common.sh@295 -- # net_devs=() 00:09:22.164 17:25:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:22.164 17:25:48 -- nvmf/common.sh@296 -- # e810=() 00:09:22.165 17:25:48 -- nvmf/common.sh@296 -- # local -ga e810 00:09:22.165 17:25:48 -- nvmf/common.sh@297 -- # 
x722=() 00:09:22.165 17:25:48 -- nvmf/common.sh@297 -- # local -ga x722 00:09:22.165 17:25:48 -- nvmf/common.sh@298 -- # mlx=() 00:09:22.165 17:25:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:22.165 17:25:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.165 17:25:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.165 17:25:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.165 17:25:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.165 17:25:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.165 17:25:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.165 17:25:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.165 17:25:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.165 17:25:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.165 17:25:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.165 17:25:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.165 17:25:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:22.165 17:25:48 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:22.165 17:25:48 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:22.165 17:25:48 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:22.165 17:25:48 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:22.165 17:25:48 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:22.165 17:25:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:22.165 17:25:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.165 17:25:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:09:22.165 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:09:22.165 17:25:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:22.165 17:25:48 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:22.165 17:25:48 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:22.165 17:25:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:22.165 17:25:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.165 17:25:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:09:22.165 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:09:22.165 17:25:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:22.165 17:25:48 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:22.165 17:25:48 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:22.165 17:25:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:22.165 17:25:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:22.165 17:25:48 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:22.165 17:25:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.165 17:25:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.165 17:25:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:22.165 17:25:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.165 17:25:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:09:22.165 Found net devices under 0000:09:00.0: mlx_0_0 00:09:22.165 17:25:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.165 17:25:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.165 
17:25:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.165 17:25:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:22.165 17:25:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.165 17:25:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:09:22.165 Found net devices under 0000:09:00.1: mlx_0_1 00:09:22.165 17:25:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.165 17:25:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:22.165 17:25:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:22.165 17:25:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:22.165 17:25:48 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:22.165 17:25:48 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:22.165 17:25:48 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:22.165 17:25:48 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:22.165 17:25:48 -- nvmf/common.sh@58 -- # uname 00:09:22.165 17:25:48 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:22.165 17:25:48 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:22.165 17:25:48 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:22.165 17:25:48 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:22.165 17:25:48 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:22.165 17:25:48 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:22.165 17:25:48 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:22.165 17:25:48 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:22.165 17:25:48 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:22.165 17:25:48 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:22.165 17:25:48 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:22.165 17:25:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:22.165 17:25:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:22.165 17:25:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:22.165 17:25:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:22.426 17:25:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:22.426 17:25:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:22.426 17:25:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.426 17:25:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:22.426 17:25:48 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:22.426 17:25:48 -- nvmf/common.sh@105 -- # continue 2 00:09:22.426 17:25:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:22.426 17:25:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.426 17:25:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:22.426 17:25:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.426 17:25:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:22.426 17:25:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:22.426 17:25:48 -- nvmf/common.sh@105 -- # continue 2 00:09:22.426 17:25:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:22.426 17:25:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:22.426 17:25:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:22.426 17:25:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:22.426 17:25:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:22.426 17:25:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:22.426 17:25:48 -- nvmf/common.sh@74 -- # ip=192.168.100.8 
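The interface discovery traced here comes down to two small lookups that are easy to reproduce by hand: sysfs maps a Mellanox PCI function to its netdev name, and ip -o -4 addr show piped through awk and cut yields the address that becomes NVMF_FIRST_TARGET_IP. A standalone sketch using the PCI address from this run (the single-netdev-per-port assumption and the address itself are specific to this machine):

    pci=0000:09:00.0                              # first mlx5 port found above (0x15b3 - 0x1017)
    netdev=$(ls "/sys/bus/pci/devices/$pci/net")  # assumes one netdev per function -> mlx_0_0 here
    ipaddr=$(ip -o -4 addr show "$netdev" | awk '{print $4}' | cut -d/ -f1)
    echo "$netdev $ipaddr"                        # prints "mlx_0_0 192.168.100.8" on this node

On this host the two ports already carry 192.168.100.8 and 192.168.100.9, which the harness records as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP a few lines further on.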
00:09:22.426 17:25:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:22.426 17:25:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:22.426 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:22.426 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:09:22.426 altname enp9s0f0np0 00:09:22.426 inet 192.168.100.8/24 scope global mlx_0_0 00:09:22.426 valid_lft forever preferred_lft forever 00:09:22.426 17:25:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:22.426 17:25:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:22.426 17:25:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:22.426 17:25:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:22.426 17:25:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:22.426 17:25:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:22.426 17:25:48 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:22.426 17:25:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:22.426 17:25:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:22.426 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:22.426 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:09:22.426 altname enp9s0f1np1 00:09:22.426 inet 192.168.100.9/24 scope global mlx_0_1 00:09:22.426 valid_lft forever preferred_lft forever 00:09:22.426 17:25:48 -- nvmf/common.sh@411 -- # return 0 00:09:22.426 17:25:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:22.426 17:25:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:22.426 17:25:48 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:22.426 17:25:48 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:22.426 17:25:48 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:22.426 17:25:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:22.426 17:25:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:22.426 17:25:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:22.426 17:25:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:22.426 17:25:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:22.426 17:25:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:22.426 17:25:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.426 17:25:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:22.426 17:25:48 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:22.426 17:25:48 -- nvmf/common.sh@105 -- # continue 2 00:09:22.426 17:25:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:22.426 17:25:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.426 17:25:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:22.426 17:25:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:22.426 17:25:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:22.426 17:25:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:22.426 17:25:48 -- nvmf/common.sh@105 -- # continue 2 00:09:22.426 17:25:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:22.426 17:25:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:22.426 17:25:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:22.426 17:25:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:22.426 17:25:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:22.426 17:25:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:22.426 17:25:48 -- nvmf/common.sh@86 -- # for 
nic_name in $(get_rdma_if_list) 00:09:22.426 17:25:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:22.426 17:25:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:22.426 17:25:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:22.426 17:25:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:22.426 17:25:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:22.426 17:25:48 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:22.426 192.168.100.9' 00:09:22.426 17:25:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:22.426 192.168.100.9' 00:09:22.426 17:25:48 -- nvmf/common.sh@446 -- # head -n 1 00:09:22.426 17:25:48 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:22.426 17:25:48 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:22.426 192.168.100.9' 00:09:22.426 17:25:48 -- nvmf/common.sh@447 -- # tail -n +2 00:09:22.426 17:25:48 -- nvmf/common.sh@447 -- # head -n 1 00:09:22.426 17:25:48 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:22.426 17:25:48 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:22.426 17:25:48 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:22.426 17:25:48 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:22.426 17:25:48 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:22.426 17:25:48 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:22.426 17:25:48 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:22.427 17:25:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:22.427 17:25:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:22.427 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:09:22.427 17:25:48 -- nvmf/common.sh@470 -- # nvmfpid=1955390 00:09:22.427 17:25:48 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:22.427 17:25:48 -- nvmf/common.sh@471 -- # waitforlisten 1955390 00:09:22.427 17:25:48 -- common/autotest_common.sh@817 -- # '[' -z 1955390 ']' 00:09:22.427 17:25:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.427 17:25:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:22.427 17:25:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.427 17:25:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:22.427 17:25:48 -- common/autotest_common.sh@10 -- # set +x 00:09:22.427 [2024-04-18 17:25:48.834933] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:09:22.427 [2024-04-18 17:25:48.835013] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.427 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.427 [2024-04-18 17:25:48.891994] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:22.686 [2024-04-18 17:25:49.003116] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.686 [2024-04-18 17:25:49.003174] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:22.686 [2024-04-18 17:25:49.003203] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.686 [2024-04-18 17:25:49.003214] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.686 [2024-04-18 17:25:49.003224] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.686 [2024-04-18 17:25:49.003312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.686 [2024-04-18 17:25:49.003316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.686 17:25:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:22.686 17:25:49 -- common/autotest_common.sh@850 -- # return 0 00:09:22.686 17:25:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:22.686 17:25:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:22.686 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:09:22.686 17:25:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.686 17:25:49 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:22.686 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.686 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:09:22.686 [2024-04-18 17:25:49.177719] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xed4530/0xed8a20) succeed. 00:09:22.686 [2024-04-18 17:25:49.189489] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xed5a30/0xf1a0b0) succeed. 00:09:22.945 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.945 17:25:49 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:22.945 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.945 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:09:22.945 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.945 17:25:49 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:22.945 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.945 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:09:22.945 [2024-04-18 17:25:49.289110] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:22.945 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.945 17:25:49 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:22.945 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.945 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:09:22.945 NULL1 00:09:22.945 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.945 17:25:49 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:22.945 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.945 17:25:49 -- common/autotest_common.sh@10 -- # set +x 00:09:22.945 Delay0 00:09:22.945 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.945 17:25:49 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.945 17:25:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.945 17:25:49 -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.945 17:25:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.945 17:25:49 -- target/delete_subsystem.sh@28 -- # perf_pid=1955425 00:09:22.945 17:25:49 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:22.945 17:25:49 -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:22.945 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.946 [2024-04-18 17:25:49.387842] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:24.846 17:25:51 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.846 17:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.846 17:25:51 -- common/autotest_common.sh@10 -- # set +x 00:09:26.218 NVMe io qpair process completion error 00:09:26.218 NVMe io qpair process completion error 00:09:26.218 NVMe io qpair process completion error 00:09:26.218 NVMe io qpair process completion error 00:09:26.218 NVMe io qpair process completion error 00:09:26.218 NVMe io qpair process completion error 00:09:26.218 17:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.218 17:25:52 -- target/delete_subsystem.sh@34 -- # delay=0 00:09:26.218 17:25:52 -- target/delete_subsystem.sh@35 -- # kill -0 1955425 00:09:26.218 17:25:52 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:26.475 17:25:52 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:26.475 17:25:52 -- target/delete_subsystem.sh@35 -- # kill -0 1955425 00:09:26.475 17:25:52 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 
00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 
00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting 
I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 starting I/O failed: -6 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, 
sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Write completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.041 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Write completed 
with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Read completed with error (sct=0, sc=8) 00:09:27.042 Write completed with error (sct=0, sc=8) 00:09:27.042 [2024-04-18 17:25:53.476262] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:09:27.042 17:25:53 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:27.042 17:25:53 -- target/delete_subsystem.sh@35 -- # kill -0 1955425 00:09:27.042 17:25:53 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:27.042 [2024-04-18 17:25:53.492696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:09:27.042 [2024-04-18 17:25:53.492725] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:09:27.042 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:27.042 Initializing NVMe Controllers 00:09:27.042 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:27.042 Controller IO queue size 128, less than required. 00:09:27.042 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:27.042 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:27.042 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:27.042 Initialization complete. Launching workers. 
00:09:27.042 ======================================================== 00:09:27.042 Latency(us) 00:09:27.042 Device Information : IOPS MiB/s Average min max 00:09:27.042 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.53 0.04 1594414.51 1001083.39 2973896.76 00:09:27.042 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.53 0.04 1592968.23 1000129.63 2973542.99 00:09:27.042 ======================================================== 00:09:27.042 Total : 161.07 0.08 1593691.37 1000129.63 2973896.76 00:09:27.042 00:09:27.611 17:25:53 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:27.611 17:25:53 -- target/delete_subsystem.sh@35 -- # kill -0 1955425 00:09:27.611 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1955425) - No such process 00:09:27.611 17:25:53 -- target/delete_subsystem.sh@45 -- # NOT wait 1955425 00:09:27.611 17:25:53 -- common/autotest_common.sh@638 -- # local es=0 00:09:27.611 17:25:53 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 1955425 00:09:27.611 17:25:53 -- common/autotest_common.sh@626 -- # local arg=wait 00:09:27.611 17:25:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:27.611 17:25:53 -- common/autotest_common.sh@630 -- # type -t wait 00:09:27.611 17:25:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:27.611 17:25:53 -- common/autotest_common.sh@641 -- # wait 1955425 00:09:27.611 17:25:53 -- common/autotest_common.sh@641 -- # es=1 00:09:27.611 17:25:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:27.611 17:25:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:27.611 17:25:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:27.611 17:25:53 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:27.611 17:25:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:27.611 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:09:27.611 17:25:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:27.611 17:25:53 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:27.611 17:25:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:27.611 17:25:53 -- common/autotest_common.sh@10 -- # set +x 00:09:27.611 [2024-04-18 17:25:54.000549] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:27.611 17:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:27.611 17:25:54 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.611 17:25:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:27.611 17:25:54 -- common/autotest_common.sh@10 -- # set +x 00:09:27.611 17:25:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:27.611 17:25:54 -- target/delete_subsystem.sh@54 -- # perf_pid=1955956 00:09:27.611 17:25:54 -- target/delete_subsystem.sh@56 -- # delay=0 00:09:27.611 17:25:54 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:27.611 17:25:54 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:27.611 17:25:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:27.611 
EAL: No free 2048 kB hugepages reported on node 1 00:09:27.611 [2024-04-18 17:25:54.082430] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:28.179 17:25:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:28.179 17:25:54 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:28.179 17:25:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:28.747 17:25:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:28.747 17:25:55 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:28.747 17:25:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:29.007 17:25:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:29.007 17:25:55 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:29.007 17:25:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:29.575 17:25:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:29.575 17:25:56 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:29.575 17:25:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:30.143 17:25:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:30.143 17:25:56 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:30.143 17:25:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:30.709 17:25:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:30.709 17:25:57 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:30.709 17:25:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.278 17:25:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:31.278 17:25:57 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:31.278 17:25:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.537 17:25:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:31.537 17:25:58 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:31.537 17:25:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:32.105 17:25:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:32.105 17:25:58 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:32.105 17:25:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:32.674 17:25:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:32.674 17:25:59 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:32.674 17:25:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:33.242 17:25:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:33.242 17:25:59 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:33.242 17:25:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:33.809 17:26:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:33.809 17:26:00 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:33.809 17:26:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:34.067 17:26:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:34.067 17:26:00 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:34.067 17:26:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:34.635 17:26:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:34.635 17:26:01 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:34.635 17:26:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:34.894 Initializing NVMe 
Controllers 00:09:34.894 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:34.894 Controller IO queue size 128, less than required. 00:09:34.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:34.894 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:34.894 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:34.894 Initialization complete. Launching workers. 00:09:34.894 ======================================================== 00:09:34.894 Latency(us) 00:09:34.894 Device Information : IOPS MiB/s Average min max 00:09:34.894 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001785.63 1000066.44 1004711.57 00:09:34.894 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003024.00 1000194.00 1007327.02 00:09:34.894 ======================================================== 00:09:34.894 Total : 256.00 0.12 1002404.82 1000066.44 1007327.02 00:09:34.894 00:09:35.155 17:26:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:35.155 17:26:01 -- target/delete_subsystem.sh@57 -- # kill -0 1955956 00:09:35.155 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1955956) - No such process 00:09:35.155 17:26:01 -- target/delete_subsystem.sh@67 -- # wait 1955956 00:09:35.155 17:26:01 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:35.155 17:26:01 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:35.155 17:26:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:35.155 17:26:01 -- nvmf/common.sh@117 -- # sync 00:09:35.155 17:26:01 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:35.155 17:26:01 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:35.155 17:26:01 -- nvmf/common.sh@120 -- # set +e 00:09:35.155 17:26:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:35.155 17:26:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:35.155 rmmod nvme_rdma 00:09:35.155 rmmod nvme_fabrics 00:09:35.155 17:26:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:35.155 17:26:01 -- nvmf/common.sh@124 -- # set -e 00:09:35.155 17:26:01 -- nvmf/common.sh@125 -- # return 0 00:09:35.155 17:26:01 -- nvmf/common.sh@478 -- # '[' -n 1955390 ']' 00:09:35.155 17:26:01 -- nvmf/common.sh@479 -- # killprocess 1955390 00:09:35.155 17:26:01 -- common/autotest_common.sh@936 -- # '[' -z 1955390 ']' 00:09:35.155 17:26:01 -- common/autotest_common.sh@940 -- # kill -0 1955390 00:09:35.155 17:26:01 -- common/autotest_common.sh@941 -- # uname 00:09:35.155 17:26:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:35.155 17:26:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1955390 00:09:35.155 17:26:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:35.155 17:26:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:35.155 17:26:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1955390' 00:09:35.155 killing process with pid 1955390 00:09:35.155 17:26:01 -- common/autotest_common.sh@955 -- # kill 1955390 00:09:35.155 17:26:01 -- common/autotest_common.sh@960 -- # wait 1955390 00:09:35.724 17:26:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:35.724 17:26:01 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:35.724 00:09:35.724 real 0m15.304s 
00:09:35.724 user 0m47.762s 00:09:35.724 sys 0m2.542s 00:09:35.724 17:26:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:35.724 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:09:35.724 ************************************ 00:09:35.724 END TEST nvmf_delete_subsystem 00:09:35.724 ************************************ 00:09:35.724 17:26:01 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:09:35.724 17:26:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:35.724 17:26:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:35.724 17:26:01 -- common/autotest_common.sh@10 -- # set +x 00:09:35.724 ************************************ 00:09:35.724 START TEST nvmf_ns_masking 00:09:35.724 ************************************ 00:09:35.724 17:26:02 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:09:35.724 * Looking for test storage... 00:09:35.724 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:35.724 17:26:02 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.724 17:26:02 -- nvmf/common.sh@7 -- # uname -s 00:09:35.724 17:26:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.724 17:26:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.724 17:26:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.724 17:26:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.724 17:26:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.724 17:26:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.724 17:26:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.724 17:26:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.724 17:26:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.724 17:26:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.724 17:26:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:35.724 17:26:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:35.724 17:26:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.724 17:26:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.724 17:26:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.724 17:26:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.724 17:26:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:35.724 17:26:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.724 17:26:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.724 17:26:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.724 17:26:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.724 17:26:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.724 17:26:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.724 17:26:02 -- paths/export.sh@5 -- # export PATH 00:09:35.724 17:26:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.724 17:26:02 -- nvmf/common.sh@47 -- # : 0 00:09:35.724 17:26:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.724 17:26:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.724 17:26:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.724 17:26:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.725 17:26:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.725 17:26:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:35.725 17:26:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.725 17:26:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.725 17:26:02 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:35.725 17:26:02 -- target/ns_masking.sh@11 -- # loops=5 00:09:35.725 17:26:02 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:35.725 17:26:02 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:09:35.725 17:26:02 -- target/ns_masking.sh@15 -- # uuidgen 00:09:35.725 17:26:02 -- target/ns_masking.sh@15 -- # HOSTID=65c5edba-69bc-49d6-a033-20c43c9e074b 00:09:35.725 17:26:02 -- target/ns_masking.sh@44 -- # nvmftestinit 00:09:35.725 17:26:02 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:35.725 17:26:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.725 17:26:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:35.725 17:26:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:35.725 17:26:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:35.725 17:26:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.725 17:26:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.725 17:26:02 -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:09:35.725 17:26:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:35.725 17:26:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:35.725 17:26:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:35.725 17:26:02 -- common/autotest_common.sh@10 -- # set +x 00:09:37.670 17:26:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:37.670 17:26:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:37.670 17:26:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:37.670 17:26:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:37.670 17:26:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:37.670 17:26:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:37.670 17:26:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:37.670 17:26:04 -- nvmf/common.sh@295 -- # net_devs=() 00:09:37.670 17:26:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:37.670 17:26:04 -- nvmf/common.sh@296 -- # e810=() 00:09:37.670 17:26:04 -- nvmf/common.sh@296 -- # local -ga e810 00:09:37.670 17:26:04 -- nvmf/common.sh@297 -- # x722=() 00:09:37.670 17:26:04 -- nvmf/common.sh@297 -- # local -ga x722 00:09:37.670 17:26:04 -- nvmf/common.sh@298 -- # mlx=() 00:09:37.670 17:26:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:37.670 17:26:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.670 17:26:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.670 17:26:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.670 17:26:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.670 17:26:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.670 17:26:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.670 17:26:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.670 17:26:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.670 17:26:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.670 17:26:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.670 17:26:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.670 17:26:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:37.670 17:26:04 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:37.670 17:26:04 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:37.670 17:26:04 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:37.670 17:26:04 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:37.670 17:26:04 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:37.670 17:26:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:37.670 17:26:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.670 17:26:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:09:37.670 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:09:37.670 17:26:04 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:37.670 17:26:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:37.670 17:26:04 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:37.670 17:26:04 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:37.670 17:26:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.670 17:26:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:09:37.670 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:09:37.670 17:26:04 
-- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:37.670 17:26:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:37.670 17:26:04 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:37.670 17:26:04 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:37.670 17:26:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:37.670 17:26:04 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:37.670 17:26:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.670 17:26:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.670 17:26:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:37.670 17:26:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.670 17:26:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:09:37.670 Found net devices under 0000:09:00.0: mlx_0_0 00:09:37.670 17:26:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.670 17:26:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.670 17:26:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.670 17:26:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:37.670 17:26:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.670 17:26:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:09:37.670 Found net devices under 0000:09:00.1: mlx_0_1 00:09:37.670 17:26:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.670 17:26:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:37.670 17:26:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:37.670 17:26:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:37.670 17:26:04 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:37.670 17:26:04 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:37.670 17:26:04 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:37.670 17:26:04 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:37.670 17:26:04 -- nvmf/common.sh@58 -- # uname 00:09:37.670 17:26:04 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:37.670 17:26:04 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:37.670 17:26:04 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:37.670 17:26:04 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:37.670 17:26:04 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:37.670 17:26:04 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:37.949 17:26:04 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:37.949 17:26:04 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:37.949 17:26:04 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:37.949 17:26:04 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:37.949 17:26:04 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:37.949 17:26:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:37.949 17:26:04 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:37.949 17:26:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:37.949 17:26:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:37.949 17:26:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:37.949 17:26:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:37.949 17:26:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.949 17:26:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:37.949 17:26:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:37.949 17:26:04 
-- nvmf/common.sh@105 -- # continue 2 00:09:37.949 17:26:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:37.949 17:26:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.949 17:26:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:37.949 17:26:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.949 17:26:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:37.949 17:26:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:37.949 17:26:04 -- nvmf/common.sh@105 -- # continue 2 00:09:37.949 17:26:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:37.949 17:26:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:37.949 17:26:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:37.949 17:26:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:37.949 17:26:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:37.949 17:26:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:37.949 17:26:04 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:37.949 17:26:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:37.949 17:26:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:37.949 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:37.949 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:09:37.949 altname enp9s0f0np0 00:09:37.949 inet 192.168.100.8/24 scope global mlx_0_0 00:09:37.949 valid_lft forever preferred_lft forever 00:09:37.949 17:26:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:37.949 17:26:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:37.949 17:26:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:37.949 17:26:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:37.949 17:26:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:37.949 17:26:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:37.949 17:26:04 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:37.949 17:26:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:37.949 17:26:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:37.949 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:37.949 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:09:37.949 altname enp9s0f1np1 00:09:37.949 inet 192.168.100.9/24 scope global mlx_0_1 00:09:37.949 valid_lft forever preferred_lft forever 00:09:37.949 17:26:04 -- nvmf/common.sh@411 -- # return 0 00:09:37.949 17:26:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:37.949 17:26:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:37.949 17:26:04 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:37.949 17:26:04 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:37.949 17:26:04 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:37.949 17:26:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:37.949 17:26:04 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:37.949 17:26:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:37.949 17:26:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:37.949 17:26:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:37.949 17:26:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:37.949 17:26:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.950 17:26:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:37.950 17:26:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 
00:09:37.950 17:26:04 -- nvmf/common.sh@105 -- # continue 2 00:09:37.950 17:26:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:37.950 17:26:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.950 17:26:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:37.950 17:26:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:37.950 17:26:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:37.950 17:26:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:37.950 17:26:04 -- nvmf/common.sh@105 -- # continue 2 00:09:37.950 17:26:04 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:37.950 17:26:04 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:37.950 17:26:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:37.950 17:26:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:37.950 17:26:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:37.950 17:26:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:37.950 17:26:04 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:37.950 17:26:04 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:37.950 17:26:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:37.950 17:26:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:37.950 17:26:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:37.950 17:26:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:37.950 17:26:04 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:37.950 192.168.100.9' 00:09:37.950 17:26:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:37.950 192.168.100.9' 00:09:37.950 17:26:04 -- nvmf/common.sh@446 -- # head -n 1 00:09:37.950 17:26:04 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:37.950 17:26:04 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:37.950 192.168.100.9' 00:09:37.950 17:26:04 -- nvmf/common.sh@447 -- # tail -n +2 00:09:37.950 17:26:04 -- nvmf/common.sh@447 -- # head -n 1 00:09:37.950 17:26:04 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:37.950 17:26:04 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:37.950 17:26:04 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:37.950 17:26:04 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:37.950 17:26:04 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:37.950 17:26:04 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:37.950 17:26:04 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:09:37.950 17:26:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:37.950 17:26:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:37.950 17:26:04 -- common/autotest_common.sh@10 -- # set +x 00:09:37.950 17:26:04 -- nvmf/common.sh@470 -- # nvmfpid=1958582 00:09:37.950 17:26:04 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:37.950 17:26:04 -- nvmf/common.sh@471 -- # waitforlisten 1958582 00:09:37.950 17:26:04 -- common/autotest_common.sh@817 -- # '[' -z 1958582 ']' 00:09:37.950 17:26:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.950 17:26:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:37.950 17:26:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:37.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.950 17:26:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:37.950 17:26:04 -- common/autotest_common.sh@10 -- # set +x 00:09:37.950 [2024-04-18 17:26:04.323322] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:09:37.950 [2024-04-18 17:26:04.323464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.950 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.950 [2024-04-18 17:26:04.388314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.209 [2024-04-18 17:26:04.504882] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.209 [2024-04-18 17:26:04.504946] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.209 [2024-04-18 17:26:04.504970] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.209 [2024-04-18 17:26:04.504984] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.209 [2024-04-18 17:26:04.504996] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.209 [2024-04-18 17:26:04.505089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.209 [2024-04-18 17:26:04.505161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.209 [2024-04-18 17:26:04.505254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.209 [2024-04-18 17:26:04.505257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.775 17:26:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:38.775 17:26:05 -- common/autotest_common.sh@850 -- # return 0 00:09:38.775 17:26:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:38.775 17:26:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:38.775 17:26:05 -- common/autotest_common.sh@10 -- # set +x 00:09:38.775 17:26:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.775 17:26:05 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:39.034 [2024-04-18 17:26:05.532971] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf91b60/0xf96050) succeed. 00:09:39.034 [2024-04-18 17:26:05.543592] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf93150/0xfd76e0) succeed. 
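At this point the ns_masking target is up: nvmf_tgt was started with core mask 0xF, waitforlisten confirmed the RPC socket at /var/tmp/spdk.sock, and the RDMA transport plus the two mlx5 IB devices were created. Condensed into a short sketch (commands taken from the xtrace above, with the long Jenkins workspace prefix shortened for readability; this is an illustration of the traced bring-up, not the verbatim test script):

    # Target bring-up for the ns_masking run, as traced above.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                   # 1958582 in this run
    # waitforlisten polls /var/tmp/spdk.sock until the RPC server answers, then:
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The trace that follows builds on this target: it creates two 64 MiB malloc bdevs (Malloc1, Malloc2), exposes them through nqn.2016-06.io.spdk:cnode1 on 192.168.100.8:4420, and then exercises namespace masking from the host side with nvme connect / list-ns.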
00:09:39.293 17:26:05 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:39.293 17:26:05 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:39.293 17:26:05 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:39.551 Malloc1 00:09:39.551 17:26:05 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:39.809 Malloc2 00:09:39.809 17:26:06 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:40.067 17:26:06 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:40.325 17:26:06 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:40.584 [2024-04-18 17:26:06.971149] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:40.584 17:26:06 -- target/ns_masking.sh@61 -- # connect 00:09:40.584 17:26:06 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 65c5edba-69bc-49d6-a033-20c43c9e074b -a 192.168.100.8 -s 4420 -i 4 00:09:41.523 17:26:07 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:09:41.523 17:26:07 -- common/autotest_common.sh@1184 -- # local i=0 00:09:41.523 17:26:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.523 17:26:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:41.523 17:26:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:43.431 17:26:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:43.431 17:26:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:43.431 17:26:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:43.689 17:26:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:43.689 17:26:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:43.689 17:26:09 -- common/autotest_common.sh@1194 -- # return 0 00:09:43.689 17:26:09 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:43.689 17:26:09 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:43.689 17:26:10 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:43.689 17:26:10 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:43.689 17:26:10 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:09:43.689 17:26:10 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:43.689 17:26:10 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:43.689 [ 0]:0x1 00:09:43.689 17:26:10 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:43.689 17:26:10 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:43.689 17:26:10 -- target/ns_masking.sh@40 -- # nguid=460f11ae448c44199a707c2c06585fed 00:09:43.689 17:26:10 -- target/ns_masking.sh@41 -- # [[ 460f11ae448c44199a707c2c06585fed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:43.689 17:26:10 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Malloc2 -n 2 00:09:43.946 17:26:10 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:09:43.946 17:26:10 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:43.946 17:26:10 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:43.946 [ 0]:0x1 00:09:43.946 17:26:10 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:43.947 17:26:10 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:43.947 17:26:10 -- target/ns_masking.sh@40 -- # nguid=460f11ae448c44199a707c2c06585fed 00:09:43.947 17:26:10 -- target/ns_masking.sh@41 -- # [[ 460f11ae448c44199a707c2c06585fed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:43.947 17:26:10 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:09:43.947 17:26:10 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:43.947 17:26:10 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:43.947 [ 1]:0x2 00:09:43.947 17:26:10 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:43.947 17:26:10 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:43.947 17:26:10 -- target/ns_masking.sh@40 -- # nguid=148e176241db44c2be15e8fd58dcccbd 00:09:43.947 17:26:10 -- target/ns_masking.sh@41 -- # [[ 148e176241db44c2be15e8fd58dcccbd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:43.947 17:26:10 -- target/ns_masking.sh@69 -- # disconnect 00:09:43.947 17:26:10 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.880 17:26:11 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.880 17:26:11 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:45.138 17:26:11 -- target/ns_masking.sh@77 -- # connect 1 00:09:45.138 17:26:11 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 65c5edba-69bc-49d6-a033-20c43c9e074b -a 192.168.100.8 -s 4420 -i 4 00:09:46.517 17:26:12 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:46.517 17:26:12 -- common/autotest_common.sh@1184 -- # local i=0 00:09:46.517 17:26:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:46.517 17:26:12 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:09:46.517 17:26:12 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:09:46.517 17:26:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:48.425 17:26:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:48.425 17:26:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:48.425 17:26:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.425 17:26:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:48.425 17:26:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.425 17:26:14 -- common/autotest_common.sh@1194 -- # return 0 00:09:48.425 17:26:14 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:48.425 17:26:14 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:48.425 17:26:14 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:48.425 17:26:14 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:48.425 17:26:14 -- 
target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:09:48.425 17:26:14 -- common/autotest_common.sh@638 -- # local es=0 00:09:48.425 17:26:14 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:48.425 17:26:14 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:48.425 17:26:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:48.425 17:26:14 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:48.425 17:26:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:48.425 17:26:14 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:48.425 17:26:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:48.425 17:26:14 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:48.425 17:26:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:48.425 17:26:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:48.425 17:26:14 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:48.425 17:26:14 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:48.426 17:26:14 -- common/autotest_common.sh@641 -- # es=1 00:09:48.426 17:26:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:48.426 17:26:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:48.426 17:26:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:48.426 17:26:14 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:09:48.426 17:26:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:48.426 17:26:14 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:48.426 [ 0]:0x2 00:09:48.426 17:26:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:48.426 17:26:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:48.426 17:26:14 -- target/ns_masking.sh@40 -- # nguid=148e176241db44c2be15e8fd58dcccbd 00:09:48.426 17:26:14 -- target/ns_masking.sh@41 -- # [[ 148e176241db44c2be15e8fd58dcccbd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:48.426 17:26:14 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:48.683 17:26:15 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:09:48.683 17:26:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:48.683 17:26:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:48.683 [ 0]:0x1 00:09:48.683 17:26:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:48.683 17:26:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:48.683 17:26:15 -- target/ns_masking.sh@40 -- # nguid=460f11ae448c44199a707c2c06585fed 00:09:48.683 17:26:15 -- target/ns_masking.sh@41 -- # [[ 460f11ae448c44199a707c2c06585fed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:48.683 17:26:15 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:09:48.683 17:26:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:48.683 17:26:15 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:48.683 [ 1]:0x2 00:09:48.683 17:26:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:48.683 17:26:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:48.683 17:26:15 -- target/ns_masking.sh@40 -- # nguid=148e176241db44c2be15e8fd58dcccbd 00:09:48.683 17:26:15 -- target/ns_masking.sh@41 -- # [[ 148e176241db44c2be15e8fd58dcccbd != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:48.684 17:26:15 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:48.942 17:26:15 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:09:48.942 17:26:15 -- common/autotest_common.sh@638 -- # local es=0 00:09:48.942 17:26:15 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:48.942 17:26:15 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:48.942 17:26:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:48.942 17:26:15 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:48.942 17:26:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:48.942 17:26:15 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:48.942 17:26:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:48.942 17:26:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:48.942 17:26:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:48.942 17:26:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:48.942 17:26:15 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:48.942 17:26:15 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:48.942 17:26:15 -- common/autotest_common.sh@641 -- # es=1 00:09:48.942 17:26:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:48.942 17:26:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:48.942 17:26:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:48.942 17:26:15 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:09:48.942 17:26:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:48.942 17:26:15 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:48.942 [ 0]:0x2 00:09:48.942 17:26:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:48.942 17:26:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:49.201 17:26:15 -- target/ns_masking.sh@40 -- # nguid=148e176241db44c2be15e8fd58dcccbd 00:09:49.201 17:26:15 -- target/ns_masking.sh@41 -- # [[ 148e176241db44c2be15e8fd58dcccbd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:49.201 17:26:15 -- target/ns_masking.sh@91 -- # disconnect 00:09:49.201 17:26:15 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.769 17:26:16 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:50.030 17:26:16 -- target/ns_masking.sh@95 -- # connect 2 00:09:50.030 17:26:16 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 65c5edba-69bc-49d6-a033-20c43c9e074b -a 192.168.100.8 -s 4420 -i 4 00:09:50.965 17:26:17 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:50.965 17:26:17 -- common/autotest_common.sh@1184 -- # local i=0 00:09:50.965 17:26:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:50.965 17:26:17 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:09:50.965 17:26:17 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:09:50.965 17:26:17 -- 
common/autotest_common.sh@1191 -- # sleep 2 00:09:52.872 17:26:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:52.872 17:26:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:52.872 17:26:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:53.131 17:26:19 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:09:53.131 17:26:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:53.131 17:26:19 -- common/autotest_common.sh@1194 -- # return 0 00:09:53.131 17:26:19 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:53.131 17:26:19 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:53.131 17:26:19 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:53.131 17:26:19 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:53.131 17:26:19 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:53.131 17:26:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:53.131 17:26:19 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:53.131 [ 0]:0x1 00:09:53.131 17:26:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:53.131 17:26:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:53.131 17:26:19 -- target/ns_masking.sh@40 -- # nguid=460f11ae448c44199a707c2c06585fed 00:09:53.131 17:26:19 -- target/ns_masking.sh@41 -- # [[ 460f11ae448c44199a707c2c06585fed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:53.131 17:26:19 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:53.131 17:26:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:53.131 17:26:19 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:53.132 [ 1]:0x2 00:09:53.132 17:26:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:53.132 17:26:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:53.132 17:26:19 -- target/ns_masking.sh@40 -- # nguid=148e176241db44c2be15e8fd58dcccbd 00:09:53.132 17:26:19 -- target/ns_masking.sh@41 -- # [[ 148e176241db44c2be15e8fd58dcccbd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:53.132 17:26:19 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:53.390 17:26:19 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:09:53.390 17:26:19 -- common/autotest_common.sh@638 -- # local es=0 00:09:53.390 17:26:19 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:53.390 17:26:19 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:53.390 17:26:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.390 17:26:19 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:53.390 17:26:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.390 17:26:19 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:53.390 17:26:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:53.390 17:26:19 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:53.390 17:26:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:53.390 17:26:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:53.390 17:26:19 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:53.390 17:26:19 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:53.390 17:26:19 -- common/autotest_common.sh@641 -- # es=1 00:09:53.390 17:26:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:53.390 17:26:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:53.390 17:26:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:53.390 17:26:19 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:53.390 17:26:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:53.390 17:26:19 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:53.390 [ 0]:0x2 00:09:53.390 17:26:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:53.390 17:26:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:53.390 17:26:19 -- target/ns_masking.sh@40 -- # nguid=148e176241db44c2be15e8fd58dcccbd 00:09:53.390 17:26:19 -- target/ns_masking.sh@41 -- # [[ 148e176241db44c2be15e8fd58dcccbd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:53.390 17:26:19 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:53.390 17:26:19 -- common/autotest_common.sh@638 -- # local es=0 00:09:53.390 17:26:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:53.390 17:26:19 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:53.390 17:26:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.390 17:26:19 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:53.390 17:26:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.390 17:26:19 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:53.390 17:26:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.390 17:26:19 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:53.390 17:26:19 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:53.390 17:26:19 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:53.648 [2024-04-18 17:26:20.092949] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:53.648 request: 00:09:53.648 { 00:09:53.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.648 "nsid": 2, 00:09:53.648 "host": "nqn.2016-06.io.spdk:host1", 00:09:53.648 "method": "nvmf_ns_remove_host", 00:09:53.648 "req_id": 1 00:09:53.648 } 00:09:53.648 Got JSON-RPC error response 00:09:53.648 response: 00:09:53.648 { 00:09:53.648 "code": -32602, 00:09:53.648 "message": "Invalid parameters" 00:09:53.648 } 00:09:53.648 17:26:20 -- common/autotest_common.sh@641 -- # es=1 00:09:53.648 17:26:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:53.648 17:26:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:53.648 17:26:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:53.648 17:26:20 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:09:53.648 17:26:20 -- 
common/autotest_common.sh@638 -- # local es=0 00:09:53.649 17:26:20 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:53.649 17:26:20 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:53.649 17:26:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.649 17:26:20 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:53.649 17:26:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.649 17:26:20 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:53.649 17:26:20 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:53.649 17:26:20 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:53.649 17:26:20 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:53.649 17:26:20 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:53.649 17:26:20 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:53.649 17:26:20 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:53.649 17:26:20 -- common/autotest_common.sh@641 -- # es=1 00:09:53.649 17:26:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:53.649 17:26:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:53.649 17:26:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:53.649 17:26:20 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:09:53.649 17:26:20 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:53.649 17:26:20 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:53.649 [ 0]:0x2 00:09:53.649 17:26:20 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:53.649 17:26:20 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:53.908 17:26:20 -- target/ns_masking.sh@40 -- # nguid=148e176241db44c2be15e8fd58dcccbd 00:09:53.908 17:26:20 -- target/ns_masking.sh@41 -- # [[ 148e176241db44c2be15e8fd58dcccbd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:53.908 17:26:20 -- target/ns_masking.sh@108 -- # disconnect 00:09:53.908 17:26:20 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:54.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.476 17:26:20 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:54.736 17:26:21 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:54.736 17:26:21 -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:54.736 17:26:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:54.736 17:26:21 -- nvmf/common.sh@117 -- # sync 00:09:54.736 17:26:21 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:54.736 17:26:21 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:54.736 17:26:21 -- nvmf/common.sh@120 -- # set +e 00:09:54.736 17:26:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.736 17:26:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:54.736 rmmod nvme_rdma 00:09:54.736 rmmod nvme_fabrics 00:09:54.736 17:26:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:54.736 17:26:21 -- nvmf/common.sh@124 -- # set -e 00:09:54.736 17:26:21 -- nvmf/common.sh@125 -- # return 0 00:09:54.736 17:26:21 -- nvmf/common.sh@478 -- # '[' -n 1958582 ']' 00:09:54.736 17:26:21 -- nvmf/common.sh@479 -- # killprocess 1958582 00:09:54.736 17:26:21 -- common/autotest_common.sh@936 -- # '[' -z 1958582 ']' 00:09:54.736 17:26:21 -- 
common/autotest_common.sh@940 -- # kill -0 1958582 00:09:54.736 17:26:21 -- common/autotest_common.sh@941 -- # uname 00:09:54.736 17:26:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:54.736 17:26:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1958582 00:09:54.736 17:26:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:54.736 17:26:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:54.736 17:26:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1958582' 00:09:54.736 killing process with pid 1958582 00:09:54.737 17:26:21 -- common/autotest_common.sh@955 -- # kill 1958582 00:09:54.737 17:26:21 -- common/autotest_common.sh@960 -- # wait 1958582 00:09:55.360 17:26:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:55.360 17:26:21 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:55.360 00:09:55.360 real 0m19.548s 00:09:55.360 user 1m12.656s 00:09:55.360 sys 0m2.862s 00:09:55.360 17:26:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:55.360 17:26:21 -- common/autotest_common.sh@10 -- # set +x 00:09:55.360 ************************************ 00:09:55.360 END TEST nvmf_ns_masking 00:09:55.360 ************************************ 00:09:55.360 17:26:21 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:55.360 17:26:21 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:09:55.360 17:26:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:55.360 17:26:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.360 17:26:21 -- common/autotest_common.sh@10 -- # set +x 00:09:55.360 ************************************ 00:09:55.360 START TEST nvmf_nvme_cli 00:09:55.360 ************************************ 00:09:55.360 17:26:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:09:55.360 * Looking for test storage... 
00:09:55.360 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:55.360 17:26:21 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.360 17:26:21 -- nvmf/common.sh@7 -- # uname -s 00:09:55.360 17:26:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.360 17:26:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.360 17:26:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.360 17:26:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.360 17:26:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.360 17:26:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.360 17:26:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.360 17:26:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.360 17:26:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.360 17:26:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.360 17:26:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.360 17:26:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.360 17:26:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.360 17:26:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.360 17:26:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.360 17:26:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.360 17:26:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:55.360 17:26:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.361 17:26:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.361 17:26:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.361 17:26:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.361 17:26:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.361 17:26:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.361 17:26:21 -- paths/export.sh@5 -- # export PATH 00:09:55.361 17:26:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.361 17:26:21 -- nvmf/common.sh@47 -- # : 0 00:09:55.361 17:26:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:55.361 17:26:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:55.361 17:26:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.361 17:26:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.361 17:26:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.361 17:26:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:55.361 17:26:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:55.361 17:26:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:55.361 17:26:21 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.361 17:26:21 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.361 17:26:21 -- target/nvme_cli.sh@14 -- # devs=() 00:09:55.361 17:26:21 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:55.361 17:26:21 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:55.361 17:26:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.361 17:26:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:55.361 17:26:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:55.361 17:26:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:55.361 17:26:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.361 17:26:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:55.361 17:26:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.361 17:26:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:55.361 17:26:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:55.361 17:26:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:55.361 17:26:21 -- common/autotest_common.sh@10 -- # set +x 00:09:57.266 17:26:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:57.266 17:26:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:57.266 17:26:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:57.266 17:26:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:57.266 17:26:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:57.266 17:26:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:57.266 17:26:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:57.266 17:26:23 -- nvmf/common.sh@295 -- # net_devs=() 00:09:57.266 17:26:23 -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:09:57.266 17:26:23 -- nvmf/common.sh@296 -- # e810=() 00:09:57.266 17:26:23 -- nvmf/common.sh@296 -- # local -ga e810 00:09:57.266 17:26:23 -- nvmf/common.sh@297 -- # x722=() 00:09:57.266 17:26:23 -- nvmf/common.sh@297 -- # local -ga x722 00:09:57.266 17:26:23 -- nvmf/common.sh@298 -- # mlx=() 00:09:57.266 17:26:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:57.266 17:26:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.266 17:26:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.267 17:26:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.267 17:26:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.267 17:26:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.267 17:26:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.267 17:26:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.267 17:26:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.267 17:26:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.267 17:26:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.267 17:26:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.267 17:26:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:57.267 17:26:23 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:57.267 17:26:23 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:57.267 17:26:23 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:57.267 17:26:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:57.267 17:26:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:09:57.267 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:09:57.267 17:26:23 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:57.267 17:26:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:09:57.267 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:09:57.267 17:26:23 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:57.267 17:26:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:57.267 17:26:23 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.267 17:26:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:57.267 17:26:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.267 17:26:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:09:57.267 Found net devices under 
0000:09:00.0: mlx_0_0 00:09:57.267 17:26:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.267 17:26:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.267 17:26:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:57.267 17:26:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.267 17:26:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:09:57.267 Found net devices under 0000:09:00.1: mlx_0_1 00:09:57.267 17:26:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.267 17:26:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:57.267 17:26:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:57.267 17:26:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:57.267 17:26:23 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:57.267 17:26:23 -- nvmf/common.sh@58 -- # uname 00:09:57.267 17:26:23 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:57.267 17:26:23 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:57.267 17:26:23 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:57.267 17:26:23 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:57.267 17:26:23 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:57.267 17:26:23 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:57.267 17:26:23 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:57.267 17:26:23 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:57.267 17:26:23 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:57.267 17:26:23 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:57.267 17:26:23 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:57.267 17:26:23 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:57.267 17:26:23 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:57.267 17:26:23 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:57.267 17:26:23 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:57.267 17:26:23 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:57.267 17:26:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:57.267 17:26:23 -- nvmf/common.sh@105 -- # continue 2 00:09:57.267 17:26:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:57.267 17:26:23 -- nvmf/common.sh@105 -- # continue 2 00:09:57.267 17:26:23 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:57.267 17:26:23 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:57.267 17:26:23 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:57.267 17:26:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 
00:09:57.267 17:26:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:57.267 17:26:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:57.267 17:26:23 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:57.267 17:26:23 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:57.267 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:57.267 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:09:57.267 altname enp9s0f0np0 00:09:57.267 inet 192.168.100.8/24 scope global mlx_0_0 00:09:57.267 valid_lft forever preferred_lft forever 00:09:57.267 17:26:23 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:57.267 17:26:23 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:57.267 17:26:23 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:57.267 17:26:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:57.267 17:26:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:57.267 17:26:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:57.267 17:26:23 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:57.267 17:26:23 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:57.267 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:57.267 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:09:57.267 altname enp9s0f1np1 00:09:57.267 inet 192.168.100.9/24 scope global mlx_0_1 00:09:57.267 valid_lft forever preferred_lft forever 00:09:57.267 17:26:23 -- nvmf/common.sh@411 -- # return 0 00:09:57.267 17:26:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:57.267 17:26:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:57.267 17:26:23 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:57.267 17:26:23 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:57.267 17:26:23 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:57.267 17:26:23 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:57.267 17:26:23 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:57.267 17:26:23 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:57.267 17:26:23 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:57.267 17:26:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:57.267 17:26:23 -- nvmf/common.sh@105 -- # continue 2 00:09:57.267 17:26:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.267 17:26:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:57.267 17:26:23 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:57.267 17:26:23 -- nvmf/common.sh@105 -- # continue 2 00:09:57.267 17:26:23 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:57.267 17:26:23 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:57.267 17:26:23 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:57.267 17:26:23 -- nvmf/common.sh@113 -- # ip -o -4 addr 
show mlx_0_0 00:09:57.267 17:26:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:57.267 17:26:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:57.267 17:26:23 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:57.267 17:26:23 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:57.267 17:26:23 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:57.267 17:26:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:57.267 17:26:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:57.268 17:26:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:57.268 17:26:23 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:57.268 192.168.100.9' 00:09:57.268 17:26:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:57.268 192.168.100.9' 00:09:57.268 17:26:23 -- nvmf/common.sh@446 -- # head -n 1 00:09:57.268 17:26:23 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:57.268 17:26:23 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:57.268 192.168.100.9' 00:09:57.268 17:26:23 -- nvmf/common.sh@447 -- # tail -n +2 00:09:57.268 17:26:23 -- nvmf/common.sh@447 -- # head -n 1 00:09:57.268 17:26:23 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:57.268 17:26:23 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:57.268 17:26:23 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:57.268 17:26:23 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:57.268 17:26:23 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:57.268 17:26:23 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:57.268 17:26:23 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:57.268 17:26:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:57.268 17:26:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:57.268 17:26:23 -- common/autotest_common.sh@10 -- # set +x 00:09:57.268 17:26:23 -- nvmf/common.sh@470 -- # nvmfpid=1962540 00:09:57.268 17:26:23 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:57.268 17:26:23 -- nvmf/common.sh@471 -- # waitforlisten 1962540 00:09:57.268 17:26:23 -- common/autotest_common.sh@817 -- # '[' -z 1962540 ']' 00:09:57.268 17:26:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.268 17:26:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:57.268 17:26:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.268 17:26:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:57.268 17:26:23 -- common/autotest_common.sh@10 -- # set +x 00:09:57.528 [2024-04-18 17:26:23.841074] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:09:57.528 [2024-04-18 17:26:23.841153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.528 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.528 [2024-04-18 17:26:23.900499] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.528 [2024-04-18 17:26:24.011425] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:57.528 [2024-04-18 17:26:24.011484] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.528 [2024-04-18 17:26:24.011498] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.528 [2024-04-18 17:26:24.011510] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.528 [2024-04-18 17:26:24.011520] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.528 [2024-04-18 17:26:24.011575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.528 [2024-04-18 17:26:24.011637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.528 [2024-04-18 17:26:24.011689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.528 [2024-04-18 17:26:24.011692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.787 17:26:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:57.787 17:26:24 -- common/autotest_common.sh@850 -- # return 0 00:09:57.787 17:26:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:57.787 17:26:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:57.787 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:09:57.787 17:26:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.787 17:26:24 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:57.787 17:26:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.787 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:09:57.787 [2024-04-18 17:26:24.191468] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cb8b60/0x1cbd050) succeed. 00:09:57.787 [2024-04-18 17:26:24.202304] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cba150/0x1cfe6e0) succeed. 
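With the target for the nvme_cli test now listening, the host side of the trace below reduces to a handful of standard nvme-cli calls. A minimal sketch, reusing the address, port, subsystem NQN and the -i 15 option exactly as they appear in the trace (the per-host --hostnqn/--hostid arguments used there are omitted for brevity):
# ask the discovery service what the target exports
nvme discover -t rdma -a 192.168.100.8 -s 4420
# connect to the subsystem (the -i 15 option is carried over unchanged from the trace)
nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
# the two malloc namespaces should then appear as /dev/nvme0n1 and /dev/nvme0n2
nvme list
# tear the association back down
nvme disconnect -n nqn.2016-06.io.spdk:cnode1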
00:09:58.049 17:26:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.049 17:26:24 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:58.049 17:26:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.049 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:09:58.049 Malloc0 00:09:58.049 17:26:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.049 17:26:24 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:58.049 17:26:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.049 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:09:58.049 Malloc1 00:09:58.049 17:26:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.049 17:26:24 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:58.049 17:26:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.049 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:09:58.049 17:26:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.049 17:26:24 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:58.049 17:26:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.049 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:09:58.049 17:26:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.049 17:26:24 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:58.049 17:26:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.049 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:09:58.049 17:26:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.049 17:26:24 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:58.049 17:26:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.049 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:09:58.049 [2024-04-18 17:26:24.429013] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:58.049 17:26:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.049 17:26:24 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:58.049 17:26:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.049 17:26:24 -- common/autotest_common.sh@10 -- # set +x 00:09:58.049 17:26:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.049 17:26:24 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -a 192.168.100.8 -s 4420 00:09:58.049 00:09:58.049 Discovery Log Number of Records 2, Generation counter 2 00:09:58.049 =====Discovery Log Entry 0====== 00:09:58.049 trtype: rdma 00:09:58.049 adrfam: ipv4 00:09:58.049 subtype: current discovery subsystem 00:09:58.049 treq: not required 00:09:58.049 portid: 0 00:09:58.049 trsvcid: 4420 00:09:58.049 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:58.049 traddr: 192.168.100.8 00:09:58.049 eflags: explicit discovery connections, duplicate discovery information 00:09:58.049 rdma_prtype: not specified 00:09:58.049 rdma_qptype: connected 00:09:58.049 rdma_cms: rdma-cm 00:09:58.049 rdma_pkey: 0x0000 00:09:58.049 =====Discovery Log Entry 1====== 00:09:58.049 trtype: rdma 
00:09:58.049 adrfam: ipv4 00:09:58.049 subtype: nvme subsystem 00:09:58.049 treq: not required 00:09:58.049 portid: 0 00:09:58.049 trsvcid: 4420 00:09:58.049 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:58.049 traddr: 192.168.100.8 00:09:58.049 eflags: none 00:09:58.049 rdma_prtype: not specified 00:09:58.049 rdma_qptype: connected 00:09:58.049 rdma_cms: rdma-cm 00:09:58.049 rdma_pkey: 0x0000 00:09:58.049 17:26:24 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:58.049 17:26:24 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:58.049 17:26:24 -- nvmf/common.sh@511 -- # local dev _ 00:09:58.049 17:26:24 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:58.049 17:26:24 -- nvmf/common.sh@510 -- # nvme list 00:09:58.049 17:26:24 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:09:58.049 17:26:24 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:58.049 17:26:24 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:09:58.049 17:26:24 -- nvmf/common.sh@513 -- # read -r dev _ 00:09:58.049 17:26:24 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:58.049 17:26:24 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:02.237 17:26:28 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:02.237 17:26:28 -- common/autotest_common.sh@1184 -- # local i=0 00:10:02.237 17:26:28 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:02.237 17:26:28 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:10:02.237 17:26:28 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:10:02.237 17:26:28 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:03.613 17:26:30 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:03.613 17:26:30 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:03.613 17:26:30 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.613 17:26:30 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:10:03.613 17:26:30 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.613 17:26:30 -- common/autotest_common.sh@1194 -- # return 0 00:10:03.613 17:26:30 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:03.613 17:26:30 -- nvmf/common.sh@511 -- # local dev _ 00:10:03.613 17:26:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:03.613 17:26:30 -- nvmf/common.sh@510 -- # nvme list 00:10:03.613 17:26:30 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:03.613 17:26:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:03.613 17:26:30 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:03.613 17:26:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:03.613 17:26:30 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:03.613 17:26:30 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:10:03.613 17:26:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:03.613 17:26:30 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:03.613 17:26:30 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:10:03.613 17:26:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:03.613 17:26:30 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:03.613 /dev/nvme0n1 ]] 00:10:03.613 17:26:30 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:03.613 17:26:30 -- target/nvme_cli.sh@59 -- # get_nvme_devs 
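The waitforserial helper that gates each connect in these traces is just a bounded poll around lsblk. A condensed restatement of what the xtrace shows (the helper's internals are simplified here; the serial string, the 15-iteration bound and the 2-second sleep are taken from the trace):
# wait until the expected number of block devices carrying our serial shows up
nvme_device_counter=2   # this connect expects both malloc namespaces
i=0
while (( i++ <= 15 )); do
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
    (( nvme_devices == nvme_device_counter )) && break
    sleep 2
done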
00:10:03.613 17:26:30 -- nvmf/common.sh@511 -- # local dev _ 00:10:03.613 17:26:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:03.613 17:26:30 -- nvmf/common.sh@510 -- # nvme list 00:10:03.613 17:26:30 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:03.613 17:26:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:03.613 17:26:30 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:03.613 17:26:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:03.613 17:26:30 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:03.613 17:26:30 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:10:03.613 17:26:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:03.613 17:26:30 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:03.613 17:26:30 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:10:03.613 17:26:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:03.613 17:26:30 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:03.613 17:26:30 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.146 17:26:32 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:06.146 17:26:32 -- common/autotest_common.sh@1205 -- # local i=0 00:10:06.146 17:26:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:06.146 17:26:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.146 17:26:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:06.146 17:26:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.146 17:26:32 -- common/autotest_common.sh@1217 -- # return 0 00:10:06.146 17:26:32 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:06.147 17:26:32 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.147 17:26:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:06.147 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:10:06.147 17:26:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:06.147 17:26:32 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:06.147 17:26:32 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:06.147 17:26:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:06.147 17:26:32 -- nvmf/common.sh@117 -- # sync 00:10:06.147 17:26:32 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:06.147 17:26:32 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:06.147 17:26:32 -- nvmf/common.sh@120 -- # set +e 00:10:06.147 17:26:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:06.147 17:26:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:06.147 rmmod nvme_rdma 00:10:06.147 rmmod nvme_fabrics 00:10:06.147 17:26:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.147 17:26:32 -- nvmf/common.sh@124 -- # set -e 00:10:06.147 17:26:32 -- nvmf/common.sh@125 -- # return 0 00:10:06.147 17:26:32 -- nvmf/common.sh@478 -- # '[' -n 1962540 ']' 00:10:06.147 17:26:32 -- nvmf/common.sh@479 -- # killprocess 1962540 00:10:06.147 17:26:32 -- common/autotest_common.sh@936 -- # '[' -z 1962540 ']' 00:10:06.147 17:26:32 -- common/autotest_common.sh@940 -- # kill -0 1962540 00:10:06.147 17:26:32 -- common/autotest_common.sh@941 -- # uname 00:10:06.147 17:26:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:06.147 17:26:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1962540 00:10:06.147 17:26:32 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:06.147 17:26:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:06.147 17:26:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1962540' 00:10:06.147 killing process with pid 1962540 00:10:06.147 17:26:32 -- common/autotest_common.sh@955 -- # kill 1962540 00:10:06.147 17:26:32 -- common/autotest_common.sh@960 -- # wait 1962540 00:10:06.405 17:26:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:06.405 17:26:32 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:06.405 00:10:06.405 real 0m11.127s 00:10:06.405 user 0m35.733s 00:10:06.405 sys 0m1.914s 00:10:06.405 17:26:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:06.405 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:10:06.405 ************************************ 00:10:06.405 END TEST nvmf_nvme_cli 00:10:06.405 ************************************ 00:10:06.405 17:26:32 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:10:06.405 17:26:32 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:10:06.405 17:26:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:06.405 17:26:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:06.405 17:26:32 -- common/autotest_common.sh@10 -- # set +x 00:10:06.663 ************************************ 00:10:06.663 START TEST nvmf_host_management 00:10:06.663 ************************************ 00:10:06.663 17:26:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:10:06.663 * Looking for test storage... 00:10:06.663 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:06.663 17:26:33 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.663 17:26:33 -- nvmf/common.sh@7 -- # uname -s 00:10:06.663 17:26:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.663 17:26:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.663 17:26:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.663 17:26:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.663 17:26:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.663 17:26:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.663 17:26:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.663 17:26:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.663 17:26:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.663 17:26:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.663 17:26:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.663 17:26:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.663 17:26:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.663 17:26:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.663 17:26:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.663 17:26:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.663 17:26:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:06.663 17:26:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.663 17:26:33 -- scripts/common.sh@510 -- 
# [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.663 17:26:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.663 17:26:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.663 17:26:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.663 17:26:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.663 17:26:33 -- paths/export.sh@5 -- # export PATH 00:10:06.664 17:26:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.664 17:26:33 -- nvmf/common.sh@47 -- # : 0 00:10:06.664 17:26:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.664 17:26:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.664 17:26:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.664 17:26:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.664 17:26:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.664 17:26:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.664 17:26:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.664 17:26:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.664 17:26:33 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:06.664 17:26:33 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:06.664 17:26:33 -- target/host_management.sh@105 -- # nvmftestinit 00:10:06.664 17:26:33 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:06.664 17:26:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.664 17:26:33 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:10:06.664 17:26:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:06.664 17:26:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:06.664 17:26:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.664 17:26:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.664 17:26:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.664 17:26:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:06.664 17:26:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:06.664 17:26:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:06.664 17:26:33 -- common/autotest_common.sh@10 -- # set +x 00:10:08.569 17:26:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:08.569 17:26:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:08.569 17:26:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:08.569 17:26:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:08.569 17:26:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:08.569 17:26:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:08.569 17:26:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:08.569 17:26:34 -- nvmf/common.sh@295 -- # net_devs=() 00:10:08.569 17:26:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:08.569 17:26:34 -- nvmf/common.sh@296 -- # e810=() 00:10:08.569 17:26:34 -- nvmf/common.sh@296 -- # local -ga e810 00:10:08.569 17:26:34 -- nvmf/common.sh@297 -- # x722=() 00:10:08.569 17:26:34 -- nvmf/common.sh@297 -- # local -ga x722 00:10:08.569 17:26:34 -- nvmf/common.sh@298 -- # mlx=() 00:10:08.569 17:26:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:08.569 17:26:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.569 17:26:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.569 17:26:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.569 17:26:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.569 17:26:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.569 17:26:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.569 17:26:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.569 17:26:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.569 17:26:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.569 17:26:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.569 17:26:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.569 17:26:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:08.569 17:26:34 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:08.569 17:26:34 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:08.569 17:26:34 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:08.569 17:26:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:08.569 17:26:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.569 17:26:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:10:08.569 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:10:08.569 17:26:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:08.569 
17:26:34 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:08.569 17:26:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.569 17:26:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:10:08.569 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:10:08.569 17:26:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:08.569 17:26:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:08.569 17:26:34 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.569 17:26:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.569 17:26:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:08.569 17:26:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.569 17:26:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:10:08.569 Found net devices under 0000:09:00.0: mlx_0_0 00:10:08.569 17:26:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.569 17:26:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.569 17:26:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.569 17:26:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:08.569 17:26:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.569 17:26:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:10:08.569 Found net devices under 0000:09:00.1: mlx_0_1 00:10:08.569 17:26:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.569 17:26:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:08.569 17:26:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:08.569 17:26:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:08.569 17:26:34 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:08.569 17:26:34 -- nvmf/common.sh@58 -- # uname 00:10:08.569 17:26:34 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:08.569 17:26:34 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:08.569 17:26:34 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:08.569 17:26:34 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:08.569 17:26:34 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:08.569 17:26:34 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:08.569 17:26:34 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:08.569 17:26:34 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:08.569 17:26:34 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:08.569 17:26:34 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:08.569 17:26:34 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:08.569 17:26:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:08.569 17:26:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:08.569 17:26:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:08.569 17:26:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 
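[editor's note] The block above locates the two Mellanox ports (vendor 0x15b3, device 0x1017 at 0000:09:00.0/.1), maps them to mlx_0_0/mlx_0_1 through sysfs, and loads the RDMA kernel modules. A rough hand-run equivalent is sketched below; the sysfs vendor/device reads and the loop are illustrative stand-ins for the script's internal PCI cache, while the IDs, net-device path, and module list are taken from the log itself.
for pci in /sys/bus/pci/devices/*; do
  [ "$(cat "$pci/vendor" 2>/dev/null)" = "0x15b3" ] || continue   # Mellanox
  [ "$(cat "$pci/device" 2>/dev/null)" = "0x1017" ] || continue   # same device ID reported above
  echo "RDMA-capable port: $pci -> $(ls "$pci/net")"              # e.g. mlx_0_0 / mlx_0_1
done
sudo modprobe ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm   # modules loaded by the harness above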
00:10:08.569 17:26:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:08.569 17:26:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:08.569 17:26:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.569 17:26:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:08.569 17:26:34 -- nvmf/common.sh@105 -- # continue 2 00:10:08.569 17:26:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:08.569 17:26:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.569 17:26:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.569 17:26:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:08.569 17:26:34 -- nvmf/common.sh@105 -- # continue 2 00:10:08.569 17:26:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:08.569 17:26:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:08.569 17:26:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:08.569 17:26:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:08.569 17:26:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:08.569 17:26:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:08.569 17:26:34 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:08.569 17:26:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:08.569 17:26:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:08.569 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:08.569 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:10:08.569 altname enp9s0f0np0 00:10:08.569 inet 192.168.100.8/24 scope global mlx_0_0 00:10:08.569 valid_lft forever preferred_lft forever 00:10:08.569 17:26:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:08.569 17:26:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:08.569 17:26:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:08.569 17:26:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:08.569 17:26:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:08.569 17:26:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:08.569 17:26:35 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:08.569 17:26:35 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:08.569 17:26:35 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:08.570 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:08.570 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:10:08.570 altname enp9s0f1np1 00:10:08.570 inet 192.168.100.9/24 scope global mlx_0_1 00:10:08.570 valid_lft forever preferred_lft forever 00:10:08.570 17:26:35 -- nvmf/common.sh@411 -- # return 0 00:10:08.570 17:26:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:08.570 17:26:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:08.570 17:26:35 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:08.570 17:26:35 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:08.570 17:26:35 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:08.570 17:26:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:08.570 17:26:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:08.570 17:26:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:08.570 17:26:35 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:08.570 17:26:35 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:08.570 17:26:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:08.570 17:26:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.570 17:26:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:08.570 17:26:35 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:08.570 17:26:35 -- nvmf/common.sh@105 -- # continue 2 00:10:08.570 17:26:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:08.570 17:26:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.570 17:26:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:08.570 17:26:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:08.570 17:26:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:08.570 17:26:35 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:08.570 17:26:35 -- nvmf/common.sh@105 -- # continue 2 00:10:08.570 17:26:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:08.570 17:26:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:08.570 17:26:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:08.570 17:26:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:08.570 17:26:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:08.570 17:26:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:08.570 17:26:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:08.570 17:26:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:08.570 17:26:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:08.570 17:26:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:08.570 17:26:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:08.570 17:26:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:08.570 17:26:35 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:08.570 192.168.100.9' 00:10:08.570 17:26:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:08.570 192.168.100.9' 00:10:08.570 17:26:35 -- nvmf/common.sh@446 -- # head -n 1 00:10:08.570 17:26:35 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:08.570 17:26:35 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:08.570 192.168.100.9' 00:10:08.570 17:26:35 -- nvmf/common.sh@447 -- # tail -n +2 00:10:08.570 17:26:35 -- nvmf/common.sh@447 -- # head -n 1 00:10:08.570 17:26:35 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:08.570 17:26:35 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:08.570 17:26:35 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:08.570 17:26:35 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:08.570 17:26:35 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:08.570 17:26:35 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:08.570 17:26:35 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:10:08.570 17:26:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:08.570 17:26:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:08.570 17:26:35 -- common/autotest_common.sh@10 -- # set +x 00:10:08.828 ************************************ 00:10:08.828 START TEST nvmf_host_management 00:10:08.828 ************************************ 00:10:08.828 17:26:35 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:10:08.828 17:26:35 -- target/host_management.sh@69 -- # 
starttarget 00:10:08.828 17:26:35 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:08.828 17:26:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:08.828 17:26:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:08.828 17:26:35 -- common/autotest_common.sh@10 -- # set +x 00:10:08.828 17:26:35 -- nvmf/common.sh@470 -- # nvmfpid=1965341 00:10:08.828 17:26:35 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:08.828 17:26:35 -- nvmf/common.sh@471 -- # waitforlisten 1965341 00:10:08.828 17:26:35 -- common/autotest_common.sh@817 -- # '[' -z 1965341 ']' 00:10:08.828 17:26:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.828 17:26:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:08.828 17:26:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.828 17:26:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:08.828 17:26:35 -- common/autotest_common.sh@10 -- # set +x 00:10:08.828 [2024-04-18 17:26:35.207573] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:10:08.828 [2024-04-18 17:26:35.207645] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.828 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.828 [2024-04-18 17:26:35.268980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.086 [2024-04-18 17:26:35.383912] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.087 [2024-04-18 17:26:35.383974] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.087 [2024-04-18 17:26:35.383991] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.087 [2024-04-18 17:26:35.384004] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.087 [2024-04-18 17:26:35.384016] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
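[editor's note] starttarget above boots the NVMe-oF target via nvmfappstart: nvmf_tgt is launched with the logged flags and the script then waits for its RPC socket. A minimal sketch of that launch follows; the flags and paths are copied from the logged command line, while the SPDK variable and the polling loop are illustrative stand-ins for waitforlisten.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk        # repo root used throughout this job
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &       # -i shm id, -e tracepoint mask, -m core mask
nvmfpid=$!
# Block until the target's RPC server is up and subsystems are initialized:
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
  sleep 0.5
done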
00:10:09.087 [2024-04-18 17:26:35.384119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.087 [2024-04-18 17:26:35.384279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.087 [2024-04-18 17:26:35.384311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:09.087 [2024-04-18 17:26:35.384312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.654 17:26:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:09.654 17:26:36 -- common/autotest_common.sh@850 -- # return 0 00:10:09.654 17:26:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:09.654 17:26:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:09.654 17:26:36 -- common/autotest_common.sh@10 -- # set +x 00:10:09.654 17:26:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.654 17:26:36 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:09.654 17:26:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.654 17:26:36 -- common/autotest_common.sh@10 -- # set +x 00:10:09.654 [2024-04-18 17:26:36.189277] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bebe40/0x1bf0330) succeed. 00:10:09.913 [2024-04-18 17:26:36.200390] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bed430/0x1c319c0) succeed. 00:10:09.913 17:26:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.913 17:26:36 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:09.913 17:26:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:09.913 17:26:36 -- common/autotest_common.sh@10 -- # set +x 00:10:09.913 17:26:36 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:09.913 17:26:36 -- target/host_management.sh@23 -- # cat 00:10:09.913 17:26:36 -- target/host_management.sh@30 -- # rpc_cmd 00:10:09.913 17:26:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:09.913 17:26:36 -- common/autotest_common.sh@10 -- # set +x 00:10:09.913 Malloc0 00:10:09.913 [2024-04-18 17:26:36.408306] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:09.913 17:26:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:09.913 17:26:36 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:09.913 17:26:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:09.913 17:26:36 -- common/autotest_common.sh@10 -- # set +x 00:10:09.913 17:26:36 -- target/host_management.sh@73 -- # perfpid=1965515 00:10:09.913 17:26:36 -- target/host_management.sh@74 -- # waitforlisten 1965515 /var/tmp/bdevperf.sock 00:10:09.913 17:26:36 -- common/autotest_common.sh@817 -- # '[' -z 1965515 ']' 00:10:09.913 17:26:36 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:09.913 17:26:36 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:09.913 17:26:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:09.913 17:26:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:09.913 17:26:36 -- nvmf/common.sh@521 -- # config=() 00:10:09.913 17:26:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:10:09.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:09.913 17:26:36 -- nvmf/common.sh@521 -- # local subsystem config 00:10:09.913 17:26:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:09.913 17:26:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:09.913 17:26:36 -- common/autotest_common.sh@10 -- # set +x 00:10:09.913 17:26:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:09.913 { 00:10:09.913 "params": { 00:10:09.913 "name": "Nvme$subsystem", 00:10:09.913 "trtype": "$TEST_TRANSPORT", 00:10:09.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:09.913 "adrfam": "ipv4", 00:10:09.913 "trsvcid": "$NVMF_PORT", 00:10:09.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:09.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:09.913 "hdgst": ${hdgst:-false}, 00:10:09.913 "ddgst": ${ddgst:-false} 00:10:09.913 }, 00:10:09.913 "method": "bdev_nvme_attach_controller" 00:10:09.913 } 00:10:09.913 EOF 00:10:09.913 )") 00:10:09.913 17:26:36 -- nvmf/common.sh@543 -- # cat 00:10:09.913 17:26:36 -- nvmf/common.sh@545 -- # jq . 00:10:09.913 17:26:36 -- nvmf/common.sh@546 -- # IFS=, 00:10:09.913 17:26:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:09.913 "params": { 00:10:09.913 "name": "Nvme0", 00:10:09.913 "trtype": "rdma", 00:10:09.913 "traddr": "192.168.100.8", 00:10:09.913 "adrfam": "ipv4", 00:10:09.913 "trsvcid": "4420", 00:10:09.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:09.913 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:09.913 "hdgst": false, 00:10:09.913 "ddgst": false 00:10:09.913 }, 00:10:09.913 "method": "bdev_nvme_attach_controller" 00:10:09.913 }' 00:10:10.174 [2024-04-18 17:26:36.484895] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:10:10.174 [2024-04-18 17:26:36.484978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1965515 ] 00:10:10.174 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.174 [2024-04-18 17:26:36.546200] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.174 [2024-04-18 17:26:36.654565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.432 Running I/O for 10 seconds... 
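[editor's note] The resolved JSON fragment printed above (gen_nvmf_target_json) is what bdevperf receives over /dev/fd/63. A hand-written equivalent is sketched below, assuming the standard SPDK "subsystems"/"bdev" wrapper around the printed params/method object (the wrapper itself is not shown in the log) and a hypothetical /tmp/bdevperf_nvme0.json path; the bdevperf flags are the ones logged.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
  --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10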
00:10:10.432 17:26:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:10.432 17:26:36 -- common/autotest_common.sh@850 -- # return 0 00:10:10.432 17:26:36 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:10.432 17:26:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:10.432 17:26:36 -- common/autotest_common.sh@10 -- # set +x 00:10:10.432 17:26:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:10.432 17:26:36 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:10.432 17:26:36 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:10.432 17:26:36 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:10.432 17:26:36 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:10.432 17:26:36 -- target/host_management.sh@52 -- # local ret=1 00:10:10.432 17:26:36 -- target/host_management.sh@53 -- # local i 00:10:10.432 17:26:36 -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:10.432 17:26:36 -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:10.432 17:26:36 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:10.432 17:26:36 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:10.432 17:26:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:10.432 17:26:36 -- common/autotest_common.sh@10 -- # set +x 00:10:10.432 17:26:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:10.432 17:26:36 -- target/host_management.sh@55 -- # read_io_count=114 00:10:10.432 17:26:36 -- target/host_management.sh@58 -- # '[' 114 -ge 100 ']' 00:10:10.432 17:26:36 -- target/host_management.sh@59 -- # ret=0 00:10:10.432 17:26:36 -- target/host_management.sh@60 -- # break 00:10:10.432 17:26:36 -- target/host_management.sh@64 -- # return 0 00:10:10.432 17:26:36 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:10.432 17:26:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:10.432 17:26:36 -- common/autotest_common.sh@10 -- # set +x 00:10:10.432 17:26:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:10.432 17:26:36 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:10.432 17:26:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:10.432 17:26:36 -- common/autotest_common.sh@10 -- # set +x 00:10:10.432 17:26:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:10.432 17:26:36 -- target/host_management.sh@87 -- # sleep 1 00:10:11.816 [2024-04-18 17:26:37.970081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x187400 00:10:11.816 [2024-04-18 17:26:37.970143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.816 [2024-04-18 17:26:37.970174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x187400 00:10:11.816 [2024-04-18 17:26:37.970190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.816 
[2024-04-18 17:26:37.970207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x187400 00:10:11.816 [2024-04-18 17:26:37.970222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.816 [2024-04-18 17:26:37.970237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x187400 00:10:11.816 [2024-04-18 17:26:37.970251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.816 [2024-04-18 17:26:37.970267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x187400 00:10:11.816 [2024-04-18 17:26:37.970281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x187400 00:10:11.817 [2024-04-18 17:26:37.970310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x187400 00:10:11.817 [2024-04-18 17:26:37.970339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x187400 00:10:11.817 [2024-04-18 17:26:37.970368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x187400 00:10:11.817 [2024-04-18 17:26:37.970405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x187400 00:10:11.817 [2024-04-18 17:26:37.970435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x187400 00:10:11.817 [2024-04-18 17:26:37.970464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 
17:26:37.970486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x187400 00:10:11.817 [2024-04-18 17:26:37.970502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x187400 00:10:11.817 [2024-04-18 17:26:37.970531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x187400 00:10:11.817 [2024-04-18 17:26:37.970561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 
17:26:37.970751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x187700 00:10:11.817 [2024-04-18 17:26:37.970972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.970989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x187600 00:10:11.817 [2024-04-18 17:26:37.971003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 
17:26:37.971018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x187600 00:10:11.817 [2024-04-18 17:26:37.971032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.971047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x187600 00:10:11.817 [2024-04-18 17:26:37.971060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.971075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x187600 00:10:11.817 [2024-04-18 17:26:37.971089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.971104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x187600 00:10:11.817 [2024-04-18 17:26:37.971118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.971136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x187600 00:10:11.817 [2024-04-18 17:26:37.971151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.971166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x187600 00:10:11.817 [2024-04-18 17:26:37.971179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.971194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x187600 00:10:11.817 [2024-04-18 17:26:37.971208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.971223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x187600 00:10:11.817 [2024-04-18 17:26:37.971236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.817 [2024-04-18 17:26:37.971251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x187600 00:10:11.818 [2024-04-18 17:26:37.971265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 
17:26:37.971280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x187600 00:10:11.818 [2024-04-18 17:26:37.971293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x187600 00:10:11.818 [2024-04-18 17:26:37.971321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x187600 00:10:11.818 [2024-04-18 17:26:37.971350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x187600 00:10:11.818 [2024-04-18 17:26:37.971378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x187600 00:10:11.818 [2024-04-18 17:26:37.971414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 
17:26:37.971549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 
17:26:37.971827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x187500 00:10:11.818 [2024-04-18 17:26:37.971870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x187000 00:10:11.818 [2024-04-18 17:26:37.971899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x187000 00:10:11.818 [2024-04-18 17:26:37.971928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x187000 00:10:11.818 [2024-04-18 17:26:37.971957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.971973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x187000 00:10:11.818 [2024-04-18 17:26:37.971987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.972002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x187000 00:10:11.818 [2024-04-18 17:26:37.972017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 [2024-04-18 17:26:37.972032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x187400 00:10:11.818 [2024-04-18 17:26:37.972046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:9580 p:0 m:0 dnr:0 00:10:11.818 17:26:37 -- target/host_management.sh@91 -- # kill -9 1965515 00:10:11.818 17:26:37 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:11.818 17:26:37 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 
64 -o 65536 -w verify -t 1 00:10:11.818 17:26:37 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:11.818 17:26:37 -- nvmf/common.sh@521 -- # config=() 00:10:11.818 17:26:37 -- nvmf/common.sh@521 -- # local subsystem config 00:10:11.818 17:26:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:11.818 17:26:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:11.818 { 00:10:11.818 "params": { 00:10:11.818 "name": "Nvme$subsystem", 00:10:11.818 "trtype": "$TEST_TRANSPORT", 00:10:11.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.818 "adrfam": "ipv4", 00:10:11.818 "trsvcid": "$NVMF_PORT", 00:10:11.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.818 "hdgst": ${hdgst:-false}, 00:10:11.818 "ddgst": ${ddgst:-false} 00:10:11.818 }, 00:10:11.818 "method": "bdev_nvme_attach_controller" 00:10:11.818 } 00:10:11.818 EOF 00:10:11.818 )") 00:10:11.818 17:26:37 -- nvmf/common.sh@543 -- # cat 00:10:11.818 17:26:37 -- nvmf/common.sh@545 -- # jq . 00:10:11.818 17:26:37 -- nvmf/common.sh@546 -- # IFS=, 00:10:11.818 17:26:37 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:11.818 "params": { 00:10:11.818 "name": "Nvme0", 00:10:11.818 "trtype": "rdma", 00:10:11.818 "traddr": "192.168.100.8", 00:10:11.818 "adrfam": "ipv4", 00:10:11.818 "trsvcid": "4420", 00:10:11.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:11.818 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:11.818 "hdgst": false, 00:10:11.818 "ddgst": false 00:10:11.818 }, 00:10:11.819 "method": "bdev_nvme_attach_controller" 00:10:11.819 }' 00:10:11.819 [2024-04-18 17:26:38.015784] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:10:11.819 [2024-04-18 17:26:38.015856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1965669 ] 00:10:11.819 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.819 [2024-04-18 17:26:38.075308] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.819 [2024-04-18 17:26:38.187174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.078 Running I/O for 1 seconds... 
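[editor's note] The '--json /dev/fd/62' (and earlier /dev/fd/63) in the logged command lines comes from bash process substitution: the generated config is piped to bdevperf without touching disk. A hedged illustration using the hypothetical config file from the sketch above, with the flags of the logged second run:
"$SPDK/build/examples/bdevperf" \
  --json <(cat /tmp/bdevperf_nvme0.json) \
  -q 64 -o 65536 -w verify -t 1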
00:10:13.017 00:10:13.017 Latency(us) 00:10:13.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.017 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:13.017 Verification LBA range: start 0x0 length 0x400 00:10:13.017 Nvme0n1 : 1.02 2566.13 160.38 0.00 0.00 24407.00 1280.38 41943.04 00:10:13.017 =================================================================================================================== 00:10:13.017 Total : 2566.13 160.38 0.00 0.00 24407.00 1280.38 41943.04 00:10:13.275 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1965515 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:10:13.275 17:26:39 -- target/host_management.sh@102 -- # stoptarget 00:10:13.275 17:26:39 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:13.275 17:26:39 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:13.275 17:26:39 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:13.275 17:26:39 -- target/host_management.sh@40 -- # nvmftestfini 00:10:13.275 17:26:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:13.275 17:26:39 -- nvmf/common.sh@117 -- # sync 00:10:13.275 17:26:39 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:13.275 17:26:39 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:13.275 17:26:39 -- nvmf/common.sh@120 -- # set +e 00:10:13.275 17:26:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:13.275 17:26:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:13.275 rmmod nvme_rdma 00:10:13.275 rmmod nvme_fabrics 00:10:13.275 17:26:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:13.275 17:26:39 -- nvmf/common.sh@124 -- # set -e 00:10:13.275 17:26:39 -- nvmf/common.sh@125 -- # return 0 00:10:13.275 17:26:39 -- nvmf/common.sh@478 -- # '[' -n 1965341 ']' 00:10:13.275 17:26:39 -- nvmf/common.sh@479 -- # killprocess 1965341 00:10:13.275 17:26:39 -- common/autotest_common.sh@936 -- # '[' -z 1965341 ']' 00:10:13.275 17:26:39 -- common/autotest_common.sh@940 -- # kill -0 1965341 00:10:13.275 17:26:39 -- common/autotest_common.sh@941 -- # uname 00:10:13.275 17:26:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:13.275 17:26:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1965341 00:10:13.275 17:26:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:13.275 17:26:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:13.275 17:26:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1965341' 00:10:13.275 killing process with pid 1965341 00:10:13.275 17:26:39 -- common/autotest_common.sh@955 -- # kill 1965341 00:10:13.275 17:26:39 -- common/autotest_common.sh@960 -- # wait 1965341 00:10:13.845 [2024-04-18 17:26:40.095591] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:13.845 17:26:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:13.845 17:26:40 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:13.845 00:10:13.845 real 0m4.962s 00:10:13.845 user 0m21.677s 00:10:13.845 sys 0m0.921s 00:10:13.845 17:26:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:13.845 17:26:40 -- common/autotest_common.sh@10 -- # set +x 00:10:13.845 
************************************ 00:10:13.845 END TEST nvmf_host_management 00:10:13.845 ************************************ 00:10:13.845 17:26:40 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:13.845 00:10:13.845 real 0m7.117s 00:10:13.845 user 0m22.468s 00:10:13.845 sys 0m2.375s 00:10:13.845 17:26:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:13.845 17:26:40 -- common/autotest_common.sh@10 -- # set +x 00:10:13.845 ************************************ 00:10:13.845 END TEST nvmf_host_management 00:10:13.845 ************************************ 00:10:13.845 17:26:40 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:10:13.845 17:26:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:13.845 17:26:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:13.845 17:26:40 -- common/autotest_common.sh@10 -- # set +x 00:10:13.845 ************************************ 00:10:13.845 START TEST nvmf_lvol 00:10:13.845 ************************************ 00:10:13.845 17:26:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:10:13.845 * Looking for test storage... 00:10:13.845 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:13.845 17:26:40 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.845 17:26:40 -- nvmf/common.sh@7 -- # uname -s 00:10:13.845 17:26:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.845 17:26:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.845 17:26:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.845 17:26:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.845 17:26:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.845 17:26:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.845 17:26:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.845 17:26:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.845 17:26:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.845 17:26:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.845 17:26:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.845 17:26:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.845 17:26:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.845 17:26:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.845 17:26:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.845 17:26:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.845 17:26:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:13.845 17:26:40 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.845 17:26:40 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.845 17:26:40 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.845 17:26:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.845 17:26:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.845 17:26:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.845 17:26:40 -- paths/export.sh@5 -- # export PATH 00:10:13.845 17:26:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.845 17:26:40 -- nvmf/common.sh@47 -- # : 0 00:10:13.845 17:26:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.845 17:26:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.845 17:26:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.845 17:26:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.845 17:26:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.845 17:26:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.845 17:26:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.845 17:26:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.845 17:26:40 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:13.845 17:26:40 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:13.845 17:26:40 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:13.845 17:26:40 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:13.845 17:26:40 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:13.845 17:26:40 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:13.845 17:26:40 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:13.845 17:26:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:13.845 17:26:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:13.845 17:26:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:13.845 17:26:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:13.845 17:26:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.845 17:26:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:13.845 17:26:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.845 17:26:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:13.845 17:26:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:13.845 17:26:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:13.845 17:26:40 -- common/autotest_common.sh@10 -- # set +x 00:10:16.381 17:26:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:16.381 17:26:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:16.381 17:26:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:16.381 17:26:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:16.381 17:26:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:16.381 17:26:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:16.381 17:26:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:16.381 17:26:42 -- nvmf/common.sh@295 -- # net_devs=() 00:10:16.381 17:26:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:16.381 17:26:42 -- nvmf/common.sh@296 -- # e810=() 00:10:16.381 17:26:42 -- nvmf/common.sh@296 -- # local -ga e810 00:10:16.381 17:26:42 -- nvmf/common.sh@297 -- # x722=() 00:10:16.381 17:26:42 -- nvmf/common.sh@297 -- # local -ga x722 00:10:16.381 17:26:42 -- nvmf/common.sh@298 -- # mlx=() 00:10:16.381 17:26:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:16.381 17:26:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.381 17:26:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.381 17:26:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.381 17:26:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.381 17:26:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.381 17:26:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.381 17:26:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.381 17:26:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.381 17:26:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.381 17:26:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.381 17:26:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.381 17:26:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:16.381 17:26:42 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:16.381 17:26:42 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:16.381 17:26:42 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:16.381 17:26:42 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:16.381 17:26:42 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:16.381 17:26:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:16.381 17:26:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.381 17:26:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:10:16.381 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:10:16.381 17:26:42 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:16.381 17:26:42 -- 
nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:16.381 17:26:42 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:16.381 17:26:42 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:16.381 17:26:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.381 17:26:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:10:16.381 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:10:16.381 17:26:42 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:16.381 17:26:42 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:16.381 17:26:42 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:16.381 17:26:42 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:16.381 17:26:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:16.381 17:26:42 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:16.381 17:26:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.382 17:26:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.382 17:26:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:16.382 17:26:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.382 17:26:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:10:16.382 Found net devices under 0000:09:00.0: mlx_0_0 00:10:16.382 17:26:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.382 17:26:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.382 17:26:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.382 17:26:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:16.382 17:26:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.382 17:26:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:10:16.382 Found net devices under 0000:09:00.1: mlx_0_1 00:10:16.382 17:26:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.382 17:26:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:16.382 17:26:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:16.382 17:26:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:16.382 17:26:42 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:16.382 17:26:42 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:16.382 17:26:42 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:16.382 17:26:42 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:16.382 17:26:42 -- nvmf/common.sh@58 -- # uname 00:10:16.382 17:26:42 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:16.382 17:26:42 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:16.382 17:26:42 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:16.382 17:26:42 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:16.382 17:26:42 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:16.382 17:26:42 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:16.382 17:26:42 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:16.382 17:26:42 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:16.382 17:26:42 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:16.382 17:26:42 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:16.382 17:26:42 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:16.382 17:26:42 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:16.382 17:26:42 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:16.382 17:26:42 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:16.382 17:26:42 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:16.382 17:26:42 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:16.382 17:26:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:16.382 17:26:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.382 17:26:42 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:16.382 17:26:42 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:16.382 17:26:42 -- nvmf/common.sh@105 -- # continue 2 00:10:16.382 17:26:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:16.382 17:26:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.382 17:26:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:16.382 17:26:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.382 17:26:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:16.382 17:26:42 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:16.382 17:26:42 -- nvmf/common.sh@105 -- # continue 2 00:10:16.382 17:26:42 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:16.382 17:26:42 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:16.382 17:26:42 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:16.382 17:26:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:16.382 17:26:42 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:16.382 17:26:42 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:16.382 17:26:42 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:16.382 17:26:42 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:16.382 17:26:42 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:16.382 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:16.382 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:10:16.382 altname enp9s0f0np0 00:10:16.382 inet 192.168.100.8/24 scope global mlx_0_0 00:10:16.382 valid_lft forever preferred_lft forever 00:10:16.382 17:26:42 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:16.382 17:26:42 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:16.382 17:26:42 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:16.382 17:26:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:16.382 17:26:42 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:16.382 17:26:42 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:16.382 17:26:42 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:16.382 17:26:42 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:16.382 17:26:42 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:16.382 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:16.382 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:10:16.382 altname enp9s0f1np1 00:10:16.382 inet 192.168.100.9/24 scope global mlx_0_1 00:10:16.382 valid_lft forever preferred_lft forever 00:10:16.382 17:26:42 -- nvmf/common.sh@411 -- # return 0 00:10:16.382 17:26:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:16.382 17:26:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:16.382 17:26:42 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:16.382 17:26:42 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:16.382 17:26:42 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:16.382 17:26:42 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:16.382 17:26:42 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:16.382 17:26:42 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:16.382 17:26:42 -- 
nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:16.382 17:26:42 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:16.382 17:26:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:16.382 17:26:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.382 17:26:42 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:16.382 17:26:42 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:16.382 17:26:42 -- nvmf/common.sh@105 -- # continue 2 00:10:16.382 17:26:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:16.382 17:26:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.382 17:26:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:16.382 17:26:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:16.382 17:26:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:16.382 17:26:42 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:16.382 17:26:42 -- nvmf/common.sh@105 -- # continue 2 00:10:16.382 17:26:42 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:16.382 17:26:42 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:16.382 17:26:42 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:16.382 17:26:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:16.382 17:26:42 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:16.382 17:26:42 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:16.382 17:26:42 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:16.382 17:26:42 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:16.382 17:26:42 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:16.382 17:26:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:16.382 17:26:42 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:16.382 17:26:42 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:16.382 17:26:42 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:16.382 192.168.100.9' 00:10:16.382 17:26:42 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:16.382 192.168.100.9' 00:10:16.382 17:26:42 -- nvmf/common.sh@446 -- # head -n 1 00:10:16.382 17:26:42 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:16.382 17:26:42 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:16.382 192.168.100.9' 00:10:16.382 17:26:42 -- nvmf/common.sh@447 -- # tail -n +2 00:10:16.382 17:26:42 -- nvmf/common.sh@447 -- # head -n 1 00:10:16.382 17:26:42 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:16.382 17:26:42 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:16.382 17:26:42 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:16.382 17:26:42 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:16.382 17:26:42 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:16.382 17:26:42 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:16.382 17:26:42 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:16.382 17:26:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:16.382 17:26:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:16.382 17:26:42 -- common/autotest_common.sh@10 -- # set +x 00:10:16.382 17:26:42 -- nvmf/common.sh@470 -- # nvmfpid=1967624 00:10:16.382 17:26:42 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:16.382 17:26:42 -- nvmf/common.sh@471 -- # waitforlisten 1967624 00:10:16.382 17:26:42 -- 
common/autotest_common.sh@817 -- # '[' -z 1967624 ']' 00:10:16.382 17:26:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.382 17:26:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:16.382 17:26:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.382 17:26:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:16.382 17:26:42 -- common/autotest_common.sh@10 -- # set +x 00:10:16.382 [2024-04-18 17:26:42.492223] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:10:16.382 [2024-04-18 17:26:42.492306] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.382 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.382 [2024-04-18 17:26:42.548622] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.383 [2024-04-18 17:26:42.656672] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.383 [2024-04-18 17:26:42.656737] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.383 [2024-04-18 17:26:42.656766] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.383 [2024-04-18 17:26:42.656778] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.383 [2024-04-18 17:26:42.656788] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.383 [2024-04-18 17:26:42.656851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.383 [2024-04-18 17:26:42.656911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.383 [2024-04-18 17:26:42.656913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.383 17:26:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:16.383 17:26:42 -- common/autotest_common.sh@850 -- # return 0 00:10:16.383 17:26:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:16.383 17:26:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:16.383 17:26:42 -- common/autotest_common.sh@10 -- # set +x 00:10:16.383 17:26:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.383 17:26:42 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:16.648 [2024-04-18 17:26:43.058041] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1979fc0/0x197e4b0) succeed. 00:10:16.648 [2024-04-18 17:26:43.068209] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x197b510/0x19bfb40) succeed. 
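Before any lvol work starts, nvmf_lvol.sh brings up its own target on core mask 0x7 and creates the RDMA transport; the two create_ib_device notices above confirm both mlx5 ports were claimed. A minimal sketch of that bring-up, assuming rpc.py talks to the default /var/tmp/spdk.sock socket as it does in this run:

# Sketch: target bring-up as traced above
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &   # three reactors, matching -m 0x7
nvmfpid=$!
# waitforlisten "$nvmfpid" blocks here until the RPC socket answers
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192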
00:10:16.907 17:26:43 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.165 17:26:43 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:17.165 17:26:43 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:17.423 17:26:43 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:17.423 17:26:43 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:17.681 17:26:44 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:17.998 17:26:44 -- target/nvmf_lvol.sh@29 -- # lvs=3d4e7819-c135-467e-9894-d6266c6f5a19 00:10:17.998 17:26:44 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3d4e7819-c135-467e-9894-d6266c6f5a19 lvol 20 00:10:18.258 17:26:44 -- target/nvmf_lvol.sh@32 -- # lvol=6df188b5-1acc-46fc-9b23-a3e8812862f0 00:10:18.258 17:26:44 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:18.519 17:26:44 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6df188b5-1acc-46fc-9b23-a3e8812862f0 00:10:18.780 17:26:45 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:18.780 [2024-04-18 17:26:45.307121] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:19.039 17:26:45 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:19.039 17:26:45 -- target/nvmf_lvol.sh@42 -- # perf_pid=1968051 00:10:19.039 17:26:45 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:19.039 17:26:45 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:19.299 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.236 17:26:46 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6df188b5-1acc-46fc-9b23-a3e8812862f0 MY_SNAPSHOT 00:10:20.494 17:26:46 -- target/nvmf_lvol.sh@47 -- # snapshot=2f3c233d-f248-4f50-b9ca-bc649c3de091 00:10:20.494 17:26:46 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6df188b5-1acc-46fc-9b23-a3e8812862f0 30 00:10:20.753 17:26:47 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2f3c233d-f248-4f50-b9ca-bc649c3de091 MY_CLONE 00:10:20.753 17:26:47 -- target/nvmf_lvol.sh@49 -- # clone=c9101376-7f2b-432c-b95c-f86dc0988c08 00:10:20.753 17:26:47 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c9101376-7f2b-432c-b95c-f86dc0988c08 00:10:21.323 17:26:47 -- target/nvmf_lvol.sh@53 -- # wait 1968051 00:10:31.303 Initializing NVMe Controllers 00:10:31.303 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:10:31.303 Controller IO queue size 128, less than required. 00:10:31.303 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:31.303 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:31.303 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:31.303 Initialization complete. Launching workers. 00:10:31.303 ======================================================== 00:10:31.303 Latency(us) 00:10:31.303 Device Information : IOPS MiB/s Average min max 00:10:31.303 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15101.00 58.99 8479.17 3200.27 46174.62 00:10:31.303 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15097.20 58.97 8480.77 3218.06 48966.15 00:10:31.303 ======================================================== 00:10:31.303 Total : 30198.19 117.96 8479.97 3200.27 48966.15 00:10:31.303 00:10:31.303 17:26:56 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:31.303 17:26:57 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6df188b5-1acc-46fc-9b23-a3e8812862f0 00:10:31.303 17:26:57 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3d4e7819-c135-467e-9894-d6266c6f5a19 00:10:31.303 17:26:57 -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:31.303 17:26:57 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:31.303 17:26:57 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:31.303 17:26:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:31.303 17:26:57 -- nvmf/common.sh@117 -- # sync 00:10:31.303 17:26:57 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:31.303 17:26:57 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:31.303 17:26:57 -- nvmf/common.sh@120 -- # set +e 00:10:31.303 17:26:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:31.303 17:26:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:31.303 rmmod nvme_rdma 00:10:31.303 rmmod nvme_fabrics 00:10:31.303 17:26:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.303 17:26:57 -- nvmf/common.sh@124 -- # set -e 00:10:31.303 17:26:57 -- nvmf/common.sh@125 -- # return 0 00:10:31.303 17:26:57 -- nvmf/common.sh@478 -- # '[' -n 1967624 ']' 00:10:31.303 17:26:57 -- nvmf/common.sh@479 -- # killprocess 1967624 00:10:31.303 17:26:57 -- common/autotest_common.sh@936 -- # '[' -z 1967624 ']' 00:10:31.303 17:26:57 -- common/autotest_common.sh@940 -- # kill -0 1967624 00:10:31.303 17:26:57 -- common/autotest_common.sh@941 -- # uname 00:10:31.303 17:26:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:31.303 17:26:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1967624 00:10:31.303 17:26:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:31.303 17:26:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:31.303 17:26:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1967624' 00:10:31.303 killing process with pid 1967624 00:10:31.303 17:26:57 -- common/autotest_common.sh@955 -- # kill 1967624 00:10:31.303 17:26:57 -- common/autotest_common.sh@960 -- # wait 1967624 00:10:31.871 17:26:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:31.871 17:26:58 -- 
nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:31.871 00:10:31.871 real 0m17.965s 00:10:31.871 user 1m12.694s 00:10:31.871 sys 0m2.780s 00:10:31.871 17:26:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:31.871 17:26:58 -- common/autotest_common.sh@10 -- # set +x 00:10:31.871 ************************************ 00:10:31.871 END TEST nvmf_lvol 00:10:31.871 ************************************ 00:10:31.871 17:26:58 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:31.871 17:26:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:31.871 17:26:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:31.871 17:26:58 -- common/autotest_common.sh@10 -- # set +x 00:10:31.871 ************************************ 00:10:31.871 START TEST nvmf_lvs_grow 00:10:31.871 ************************************ 00:10:31.871 17:26:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:10:32.129 * Looking for test storage... 00:10:32.129 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:32.129 17:26:58 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.130 17:26:58 -- nvmf/common.sh@7 -- # uname -s 00:10:32.130 17:26:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.130 17:26:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.130 17:26:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.130 17:26:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.130 17:26:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.130 17:26:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.130 17:26:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.130 17:26:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.130 17:26:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.130 17:26:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.130 17:26:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:32.130 17:26:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:32.130 17:26:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.130 17:26:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.130 17:26:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.130 17:26:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.130 17:26:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:32.130 17:26:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.130 17:26:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.130 17:26:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.130 17:26:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.130 17:26:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.130 17:26:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.130 17:26:58 -- paths/export.sh@5 -- # export PATH 00:10:32.130 17:26:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.130 17:26:58 -- nvmf/common.sh@47 -- # : 0 00:10:32.130 17:26:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.130 17:26:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.130 17:26:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.130 17:26:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.130 17:26:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.130 17:26:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:32.130 17:26:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.130 17:26:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.130 17:26:58 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:32.130 17:26:58 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:32.130 17:26:58 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:10:32.130 17:26:58 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:32.130 17:26:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.130 17:26:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:32.130 17:26:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:32.130 17:26:58 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:10:32.130 17:26:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.130 17:26:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.130 17:26:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.130 17:26:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:32.130 17:26:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:32.130 17:26:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:32.130 17:26:58 -- common/autotest_common.sh@10 -- # set +x 00:10:34.034 17:27:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:34.034 17:27:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:34.034 17:27:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:34.034 17:27:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:34.034 17:27:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:34.034 17:27:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:34.034 17:27:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:34.034 17:27:00 -- nvmf/common.sh@295 -- # net_devs=() 00:10:34.034 17:27:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:34.034 17:27:00 -- nvmf/common.sh@296 -- # e810=() 00:10:34.034 17:27:00 -- nvmf/common.sh@296 -- # local -ga e810 00:10:34.034 17:27:00 -- nvmf/common.sh@297 -- # x722=() 00:10:34.034 17:27:00 -- nvmf/common.sh@297 -- # local -ga x722 00:10:34.034 17:27:00 -- nvmf/common.sh@298 -- # mlx=() 00:10:34.034 17:27:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:34.034 17:27:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.034 17:27:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.034 17:27:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.034 17:27:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.034 17:27:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.034 17:27:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.034 17:27:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.034 17:27:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.034 17:27:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.034 17:27:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.034 17:27:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.034 17:27:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:34.034 17:27:00 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:34.034 17:27:00 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:34.034 17:27:00 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:34.034 17:27:00 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:34.034 17:27:00 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:34.034 17:27:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:34.034 17:27:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.034 17:27:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:10:34.034 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:10:34.034 17:27:00 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:34.034 17:27:00 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:34.034 17:27:00 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:34.034 17:27:00 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme 
connect -i 15' 00:10:34.034 17:27:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.034 17:27:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:10:34.034 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:10:34.034 17:27:00 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:34.034 17:27:00 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:34.034 17:27:00 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:10:34.034 17:27:00 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:34.034 17:27:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:34.034 17:27:00 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:34.034 17:27:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.034 17:27:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.034 17:27:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:34.035 17:27:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.035 17:27:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:10:34.035 Found net devices under 0000:09:00.0: mlx_0_0 00:10:34.035 17:27:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.035 17:27:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.035 17:27:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.035 17:27:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:34.035 17:27:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.035 17:27:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:10:34.035 Found net devices under 0000:09:00.1: mlx_0_1 00:10:34.035 17:27:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.035 17:27:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:34.035 17:27:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:34.035 17:27:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:34.035 17:27:00 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:34.035 17:27:00 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:34.035 17:27:00 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:34.035 17:27:00 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:34.035 17:27:00 -- nvmf/common.sh@58 -- # uname 00:10:34.035 17:27:00 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:34.035 17:27:00 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:34.035 17:27:00 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:34.035 17:27:00 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:34.035 17:27:00 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:34.035 17:27:00 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:34.035 17:27:00 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:34.035 17:27:00 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:34.035 17:27:00 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:34.035 17:27:00 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:34.035 17:27:00 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:34.035 17:27:00 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:34.035 17:27:00 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:34.035 17:27:00 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:34.035 17:27:00 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:34.035 17:27:00 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:34.035 17:27:00 -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:10:34.035 17:27:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.035 17:27:00 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:34.035 17:27:00 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:34.035 17:27:00 -- nvmf/common.sh@105 -- # continue 2 00:10:34.035 17:27:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.035 17:27:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.035 17:27:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:34.035 17:27:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.035 17:27:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:34.035 17:27:00 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:34.035 17:27:00 -- nvmf/common.sh@105 -- # continue 2 00:10:34.035 17:27:00 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:34.035 17:27:00 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:34.035 17:27:00 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:34.035 17:27:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:34.035 17:27:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.035 17:27:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.035 17:27:00 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:34.035 17:27:00 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:34.035 17:27:00 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:34.035 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:34.035 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:10:34.035 altname enp9s0f0np0 00:10:34.035 inet 192.168.100.8/24 scope global mlx_0_0 00:10:34.035 valid_lft forever preferred_lft forever 00:10:34.035 17:27:00 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:34.035 17:27:00 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:34.035 17:27:00 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:34.035 17:27:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:34.035 17:27:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.035 17:27:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.035 17:27:00 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:34.035 17:27:00 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:34.035 17:27:00 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:34.035 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:34.035 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:10:34.035 altname enp9s0f1np1 00:10:34.035 inet 192.168.100.9/24 scope global mlx_0_1 00:10:34.035 valid_lft forever preferred_lft forever 00:10:34.035 17:27:00 -- nvmf/common.sh@411 -- # return 0 00:10:34.035 17:27:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:34.035 17:27:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:34.035 17:27:00 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:34.035 17:27:00 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:34.035 17:27:00 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:34.035 17:27:00 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:34.035 17:27:00 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:34.035 17:27:00 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:34.035 17:27:00 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:34.035 17:27:00 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:34.035 17:27:00 -- nvmf/common.sh@101 -- # 
for net_dev in "${net_devs[@]}" 00:10:34.035 17:27:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.035 17:27:00 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:34.035 17:27:00 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:34.035 17:27:00 -- nvmf/common.sh@105 -- # continue 2 00:10:34.035 17:27:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.035 17:27:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.035 17:27:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:34.035 17:27:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.035 17:27:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:34.035 17:27:00 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:34.035 17:27:00 -- nvmf/common.sh@105 -- # continue 2 00:10:34.035 17:27:00 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:34.035 17:27:00 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:34.035 17:27:00 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:34.035 17:27:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:34.035 17:27:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.035 17:27:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.035 17:27:00 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:34.035 17:27:00 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:34.035 17:27:00 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:34.035 17:27:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:34.035 17:27:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.035 17:27:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.035 17:27:00 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:34.035 192.168.100.9' 00:10:34.035 17:27:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:34.035 192.168.100.9' 00:10:34.035 17:27:00 -- nvmf/common.sh@446 -- # head -n 1 00:10:34.035 17:27:00 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:34.035 17:27:00 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:34.035 192.168.100.9' 00:10:34.035 17:27:00 -- nvmf/common.sh@447 -- # tail -n +2 00:10:34.035 17:27:00 -- nvmf/common.sh@447 -- # head -n 1 00:10:34.035 17:27:00 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:34.035 17:27:00 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:34.035 17:27:00 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:34.035 17:27:00 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:34.035 17:27:00 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:34.035 17:27:00 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:34.035 17:27:00 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:10:34.035 17:27:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:34.035 17:27:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:34.035 17:27:00 -- common/autotest_common.sh@10 -- # set +x 00:10:34.035 17:27:00 -- nvmf/common.sh@470 -- # nvmfpid=1971221 00:10:34.035 17:27:00 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:34.035 17:27:00 -- nvmf/common.sh@471 -- # waitforlisten 1971221 00:10:34.035 17:27:00 -- common/autotest_common.sh@817 -- # '[' -z 1971221 ']' 00:10:34.035 17:27:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.035 17:27:00 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:10:34.035 17:27:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.035 17:27:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:34.035 17:27:00 -- common/autotest_common.sh@10 -- # set +x 00:10:34.035 [2024-04-18 17:27:00.405181] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:10:34.035 [2024-04-18 17:27:00.405277] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.035 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.035 [2024-04-18 17:27:00.467244] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.294 [2024-04-18 17:27:00.585687] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.294 [2024-04-18 17:27:00.585750] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.294 [2024-04-18 17:27:00.585767] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.294 [2024-04-18 17:27:00.585780] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.294 [2024-04-18 17:27:00.585791] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:34.294 [2024-04-18 17:27:00.585828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.294 17:27:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:34.294 17:27:00 -- common/autotest_common.sh@850 -- # return 0 00:10:34.294 17:27:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:34.294 17:27:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:34.294 17:27:00 -- common/autotest_common.sh@10 -- # set +x 00:10:34.294 17:27:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.294 17:27:00 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:34.552 [2024-04-18 17:27:00.986097] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x194e9b0/0x1952ea0) succeed. 00:10:34.552 [2024-04-18 17:27:00.997972] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x194feb0/0x1994530) succeed. 
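As in the earlier tests, the addresses used by the RPCs are not hard-coded: allocate_nic_ips walks the mlx netdevs and extracts the first IPv4 address of each. The trace above reduces to one pipeline per interface; the sketch below condenses it (the script actually collects RDMA_IP_LIST first and then takes the head/tail of that list):

# Sketch: how NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP are resolved on this host
get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run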
00:10:34.552 17:27:01 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:10:34.552 17:27:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:34.552 17:27:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:34.552 17:27:01 -- common/autotest_common.sh@10 -- # set +x 00:10:34.810 ************************************ 00:10:34.810 START TEST lvs_grow_clean 00:10:34.810 ************************************ 00:10:34.810 17:27:01 -- common/autotest_common.sh@1111 -- # lvs_grow 00:10:34.810 17:27:01 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:34.810 17:27:01 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:34.810 17:27:01 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:34.810 17:27:01 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:34.810 17:27:01 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:34.810 17:27:01 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:34.810 17:27:01 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:34.810 17:27:01 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:34.810 17:27:01 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:35.067 17:27:01 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:35.067 17:27:01 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:35.327 17:27:01 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a78f37ea-5aa4-4fbb-bc48-88e52abeae32 00:10:35.327 17:27:01 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a78f37ea-5aa4-4fbb-bc48-88e52abeae32 00:10:35.327 17:27:01 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:35.585 17:27:01 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:35.585 17:27:01 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:35.585 17:27:01 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a78f37ea-5aa4-4fbb-bc48-88e52abeae32 lvol 150 00:10:35.844 17:27:02 -- target/nvmf_lvs_grow.sh@33 -- # lvol=039a64d2-4c1a-49d0-8d84-a43f005ee56e 00:10:35.844 17:27:02 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:35.844 17:27:02 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:36.104 [2024-04-18 17:27:02.423637] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:36.104 [2024-04-18 17:27:02.423725] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:36.104 true 00:10:36.104 17:27:02 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a78f37ea-5aa4-4fbb-bc48-88e52abeae32 00:10:36.104 17:27:02 -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:10:36.363 17:27:02 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:36.363 17:27:02 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:36.621 17:27:02 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 039a64d2-4c1a-49d0-8d84-a43f005ee56e 00:10:36.881 17:27:03 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:37.140 [2024-04-18 17:27:03.439052] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:37.140 17:27:03 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:37.399 17:27:03 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1971744 00:10:37.399 17:27:03 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:37.399 17:27:03 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:37.399 17:27:03 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1971744 /var/tmp/bdevperf.sock 00:10:37.399 17:27:03 -- common/autotest_common.sh@817 -- # '[' -z 1971744 ']' 00:10:37.399 17:27:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:37.399 17:27:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:37.399 17:27:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:37.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:37.399 17:27:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:37.399 17:27:03 -- common/autotest_common.sh@10 -- # set +x 00:10:37.399 [2024-04-18 17:27:03.732605] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
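The clean-grow setup that produced the output above can be replayed by hand with the same RPCs. A condensed sketch, assuming the 200 MiB file-backed AIO bdev and the 192.168.100.8 listener used by this run, with paths relative to the SPDK tree and angle-bracket placeholders standing for the UUIDs printed in the log:

  # Backing file and lvstore (4 MiB clusters), then a 150 MiB lvol on it.
  truncate -s 200M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs
  scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150

  # Grow the backing file and let the AIO bdev pick it up (51200 -> 102400 blocks).
  truncate -s 400M test/nvmf/target/aio_bdev
  scripts/rpc.py bdev_aio_rescan aio_bdev

  # Export the lvol over NVMe-oF/RDMA, as done just before bdevperf is launched.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420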
00:10:37.399 [2024-04-18 17:27:03.732694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971744 ] 00:10:37.399 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.399 [2024-04-18 17:27:03.793956] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.399 [2024-04-18 17:27:03.909543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.658 17:27:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:37.658 17:27:04 -- common/autotest_common.sh@850 -- # return 0 00:10:37.658 17:27:04 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:37.916 Nvme0n1 00:10:37.916 17:27:04 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:38.176 [ 00:10:38.176 { 00:10:38.176 "name": "Nvme0n1", 00:10:38.176 "aliases": [ 00:10:38.176 "039a64d2-4c1a-49d0-8d84-a43f005ee56e" 00:10:38.176 ], 00:10:38.176 "product_name": "NVMe disk", 00:10:38.176 "block_size": 4096, 00:10:38.176 "num_blocks": 38912, 00:10:38.176 "uuid": "039a64d2-4c1a-49d0-8d84-a43f005ee56e", 00:10:38.176 "assigned_rate_limits": { 00:10:38.176 "rw_ios_per_sec": 0, 00:10:38.176 "rw_mbytes_per_sec": 0, 00:10:38.176 "r_mbytes_per_sec": 0, 00:10:38.176 "w_mbytes_per_sec": 0 00:10:38.176 }, 00:10:38.176 "claimed": false, 00:10:38.176 "zoned": false, 00:10:38.176 "supported_io_types": { 00:10:38.176 "read": true, 00:10:38.176 "write": true, 00:10:38.176 "unmap": true, 00:10:38.176 "write_zeroes": true, 00:10:38.176 "flush": true, 00:10:38.176 "reset": true, 00:10:38.176 "compare": true, 00:10:38.176 "compare_and_write": true, 00:10:38.176 "abort": true, 00:10:38.176 "nvme_admin": true, 00:10:38.176 "nvme_io": true 00:10:38.176 }, 00:10:38.176 "memory_domains": [ 00:10:38.176 { 00:10:38.176 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:38.176 "dma_device_type": 0 00:10:38.176 } 00:10:38.176 ], 00:10:38.176 "driver_specific": { 00:10:38.176 "nvme": [ 00:10:38.176 { 00:10:38.176 "trid": { 00:10:38.176 "trtype": "RDMA", 00:10:38.176 "adrfam": "IPv4", 00:10:38.176 "traddr": "192.168.100.8", 00:10:38.176 "trsvcid": "4420", 00:10:38.176 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:38.176 }, 00:10:38.176 "ctrlr_data": { 00:10:38.176 "cntlid": 1, 00:10:38.176 "vendor_id": "0x8086", 00:10:38.176 "model_number": "SPDK bdev Controller", 00:10:38.176 "serial_number": "SPDK0", 00:10:38.176 "firmware_revision": "24.05", 00:10:38.176 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:38.176 "oacs": { 00:10:38.176 "security": 0, 00:10:38.176 "format": 0, 00:10:38.176 "firmware": 0, 00:10:38.176 "ns_manage": 0 00:10:38.176 }, 00:10:38.176 "multi_ctrlr": true, 00:10:38.176 "ana_reporting": false 00:10:38.176 }, 00:10:38.176 "vs": { 00:10:38.176 "nvme_version": "1.3" 00:10:38.176 }, 00:10:38.176 "ns_data": { 00:10:38.176 "id": 1, 00:10:38.176 "can_share": true 00:10:38.176 } 00:10:38.176 } 00:10:38.176 ], 00:10:38.176 "mp_policy": "active_passive" 00:10:38.176 } 00:10:38.176 } 00:10:38.176 ] 00:10:38.176 17:27:04 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1971877 00:10:38.176 17:27:04 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:38.176 17:27:04 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:38.436 Running I/O for 10 seconds... 00:10:39.375 Latency(us) 00:10:39.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.375 Nvme0n1 : 1.00 22848.00 89.25 0.00 0.00 0.00 0.00 0.00 00:10:39.375 =================================================================================================================== 00:10:39.375 Total : 22848.00 89.25 0.00 0.00 0.00 0.00 0.00 00:10:39.375 00:10:40.314 17:27:06 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a78f37ea-5aa4-4fbb-bc48-88e52abeae32 00:10:40.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.315 Nvme0n1 : 2.00 22848.00 89.25 0.00 0.00 0.00 0.00 0.00 00:10:40.315 =================================================================================================================== 00:10:40.315 Total : 22848.00 89.25 0.00 0.00 0.00 0.00 0.00 00:10:40.315 00:10:40.572 true 00:10:40.572 17:27:06 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a78f37ea-5aa4-4fbb-bc48-88e52abeae32 00:10:40.572 17:27:06 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:40.838 17:27:07 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:40.838 17:27:07 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:40.838 17:27:07 -- target/nvmf_lvs_grow.sh@65 -- # wait 1971877 00:10:41.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.411 Nvme0n1 : 3.00 23148.00 90.42 0.00 0.00 0.00 0.00 0.00 00:10:41.411 =================================================================================================================== 00:10:41.411 Total : 23148.00 90.42 0.00 0.00 0.00 0.00 0.00 00:10:41.411 00:10:42.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.424 Nvme0n1 : 4.00 23112.50 90.28 0.00 0.00 0.00 0.00 0.00 00:10:42.424 =================================================================================================================== 00:10:42.424 Total : 23112.50 90.28 0.00 0.00 0.00 0.00 0.00 00:10:42.424 00:10:43.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.357 Nvme0n1 : 5.00 23295.60 91.00 0.00 0.00 0.00 0.00 0.00 00:10:43.357 =================================================================================================================== 00:10:43.357 Total : 23295.60 91.00 0.00 0.00 0.00 0.00 0.00 00:10:43.357 00:10:44.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.297 Nvme0n1 : 6.00 23296.00 91.00 0.00 0.00 0.00 0.00 0.00 00:10:44.297 =================================================================================================================== 00:10:44.297 Total : 23296.00 91.00 0.00 0.00 0.00 0.00 0.00 00:10:44.297 00:10:45.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.673 Nvme0n1 : 7.00 23461.14 91.65 0.00 0.00 0.00 0.00 0.00 00:10:45.673 =================================================================================================================== 00:10:45.673 Total : 23461.14 91.65 0.00 0.00 0.00 0.00 0.00 00:10:45.673 00:10:46.610 Job: Nvme0n1 
(Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.610 Nvme0n1 : 8.00 23447.88 91.59 0.00 0.00 0.00 0.00 0.00 00:10:46.610 =================================================================================================================== 00:10:46.610 Total : 23447.88 91.59 0.00 0.00 0.00 0.00 0.00 00:10:46.610 00:10:47.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:47.548 Nvme0n1 : 9.00 23431.00 91.53 0.00 0.00 0.00 0.00 0.00 00:10:47.548 =================================================================================================================== 00:10:47.548 Total : 23431.00 91.53 0.00 0.00 0.00 0.00 0.00 00:10:47.548 00:10:48.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.485 Nvme0n1 : 10.00 23417.90 91.48 0.00 0.00 0.00 0.00 0.00 00:10:48.485 =================================================================================================================== 00:10:48.485 Total : 23417.90 91.48 0.00 0.00 0.00 0.00 0.00 00:10:48.485 00:10:48.485 00:10:48.485 Latency(us) 00:10:48.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.486 Nvme0n1 : 10.00 23419.44 91.48 0.00 0.00 5461.00 3665.16 19126.80 00:10:48.486 =================================================================================================================== 00:10:48.486 Total : 23419.44 91.48 0.00 0.00 5461.00 3665.16 19126.80 00:10:48.486 0 00:10:48.486 17:27:14 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1971744 00:10:48.486 17:27:14 -- common/autotest_common.sh@936 -- # '[' -z 1971744 ']' 00:10:48.486 17:27:14 -- common/autotest_common.sh@940 -- # kill -0 1971744 00:10:48.486 17:27:14 -- common/autotest_common.sh@941 -- # uname 00:10:48.486 17:27:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:48.486 17:27:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1971744 00:10:48.486 17:27:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:48.486 17:27:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:48.486 17:27:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1971744' 00:10:48.486 killing process with pid 1971744 00:10:48.486 17:27:14 -- common/autotest_common.sh@955 -- # kill 1971744 00:10:48.486 Received shutdown signal, test time was about 10.000000 seconds 00:10:48.486 00:10:48.486 Latency(us) 00:10:48.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.486 =================================================================================================================== 00:10:48.486 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:48.486 17:27:14 -- common/autotest_common.sh@960 -- # wait 1971744 00:10:48.744 17:27:15 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:49.001 17:27:15 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a78f37ea-5aa4-4fbb-bc48-88e52abeae32 00:10:49.001 17:27:15 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:10:49.260 17:27:15 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:10:49.260 17:27:15 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:10:49.260 17:27:15 -- target/nvmf_lvs_grow.sh@83 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:49.519 [2024-04-18 17:27:15.875230] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:49.519 17:27:15 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a78f37ea-5aa4-4fbb-bc48-88e52abeae32 00:10:49.519 17:27:15 -- common/autotest_common.sh@638 -- # local es=0 00:10:49.519 17:27:15 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a78f37ea-5aa4-4fbb-bc48-88e52abeae32 00:10:49.519 17:27:15 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:49.519 17:27:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:49.519 17:27:15 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:49.519 17:27:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:49.519 17:27:15 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:49.519 17:27:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:49.519 17:27:15 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:49.519 17:27:15 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:10:49.519 17:27:15 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a78f37ea-5aa4-4fbb-bc48-88e52abeae32 00:10:49.777 request: 00:10:49.777 { 00:10:49.778 "uuid": "a78f37ea-5aa4-4fbb-bc48-88e52abeae32", 00:10:49.778 "method": "bdev_lvol_get_lvstores", 00:10:49.778 "req_id": 1 00:10:49.778 } 00:10:49.778 Got JSON-RPC error response 00:10:49.778 response: 00:10:49.778 { 00:10:49.778 "code": -19, 00:10:49.778 "message": "No such device" 00:10:49.778 } 00:10:49.778 17:27:16 -- common/autotest_common.sh@641 -- # es=1 00:10:49.778 17:27:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:49.778 17:27:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:49.778 17:27:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:49.778 17:27:16 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:50.035 aio_bdev 00:10:50.035 17:27:16 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 039a64d2-4c1a-49d0-8d84-a43f005ee56e 00:10:50.035 17:27:16 -- common/autotest_common.sh@885 -- # local bdev_name=039a64d2-4c1a-49d0-8d84-a43f005ee56e 00:10:50.035 17:27:16 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:50.035 17:27:16 -- common/autotest_common.sh@887 -- # local i 00:10:50.035 17:27:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:50.035 17:27:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:50.035 17:27:16 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:50.293 17:27:16 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 039a64d2-4c1a-49d0-8d84-a43f005ee56e -t 2000 00:10:50.552 [ 00:10:50.552 { 00:10:50.552 "name": 
"039a64d2-4c1a-49d0-8d84-a43f005ee56e", 00:10:50.552 "aliases": [ 00:10:50.552 "lvs/lvol" 00:10:50.552 ], 00:10:50.552 "product_name": "Logical Volume", 00:10:50.552 "block_size": 4096, 00:10:50.552 "num_blocks": 38912, 00:10:50.552 "uuid": "039a64d2-4c1a-49d0-8d84-a43f005ee56e", 00:10:50.552 "assigned_rate_limits": { 00:10:50.552 "rw_ios_per_sec": 0, 00:10:50.552 "rw_mbytes_per_sec": 0, 00:10:50.552 "r_mbytes_per_sec": 0, 00:10:50.553 "w_mbytes_per_sec": 0 00:10:50.553 }, 00:10:50.553 "claimed": false, 00:10:50.553 "zoned": false, 00:10:50.553 "supported_io_types": { 00:10:50.553 "read": true, 00:10:50.553 "write": true, 00:10:50.553 "unmap": true, 00:10:50.553 "write_zeroes": true, 00:10:50.553 "flush": false, 00:10:50.553 "reset": true, 00:10:50.553 "compare": false, 00:10:50.553 "compare_and_write": false, 00:10:50.553 "abort": false, 00:10:50.553 "nvme_admin": false, 00:10:50.553 "nvme_io": false 00:10:50.553 }, 00:10:50.553 "driver_specific": { 00:10:50.553 "lvol": { 00:10:50.553 "lvol_store_uuid": "a78f37ea-5aa4-4fbb-bc48-88e52abeae32", 00:10:50.553 "base_bdev": "aio_bdev", 00:10:50.553 "thin_provision": false, 00:10:50.553 "snapshot": false, 00:10:50.553 "clone": false, 00:10:50.553 "esnap_clone": false 00:10:50.553 } 00:10:50.553 } 00:10:50.553 } 00:10:50.553 ] 00:10:50.553 17:27:16 -- common/autotest_common.sh@893 -- # return 0 00:10:50.553 17:27:16 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a78f37ea-5aa4-4fbb-bc48-88e52abeae32 00:10:50.553 17:27:16 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:10:50.847 17:27:17 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:10:50.847 17:27:17 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a78f37ea-5aa4-4fbb-bc48-88e52abeae32 00:10:50.847 17:27:17 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:10:50.847 17:27:17 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:10:50.847 17:27:17 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 039a64d2-4c1a-49d0-8d84-a43f005ee56e 00:10:51.105 17:27:17 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a78f37ea-5aa4-4fbb-bc48-88e52abeae32 00:10:51.363 17:27:17 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:51.622 17:27:18 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:51.622 00:10:51.622 real 0m16.969s 00:10:51.622 user 0m17.060s 00:10:51.622 sys 0m1.261s 00:10:51.622 17:27:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:51.622 17:27:18 -- common/autotest_common.sh@10 -- # set +x 00:10:51.622 ************************************ 00:10:51.622 END TEST lvs_grow_clean 00:10:51.622 ************************************ 00:10:51.622 17:27:18 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:51.622 17:27:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:51.622 17:27:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:51.622 17:27:18 -- common/autotest_common.sh@10 -- # set +x 00:10:51.880 ************************************ 00:10:51.880 START TEST lvs_grow_dirty 00:10:51.880 ************************************ 00:10:51.880 17:27:18 -- 
common/autotest_common.sh@1111 -- # lvs_grow dirty 00:10:51.880 17:27:18 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:51.880 17:27:18 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:51.880 17:27:18 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:51.880 17:27:18 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:51.880 17:27:18 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:51.880 17:27:18 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:51.880 17:27:18 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:51.880 17:27:18 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:51.880 17:27:18 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:52.139 17:27:18 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:52.139 17:27:18 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:52.398 17:27:18 -- target/nvmf_lvs_grow.sh@28 -- # lvs=1423a0ff-203e-461c-8456-95e32ceaf89e 00:10:52.398 17:27:18 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1423a0ff-203e-461c-8456-95e32ceaf89e 00:10:52.398 17:27:18 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:52.657 17:27:18 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:52.657 17:27:18 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:52.657 17:27:18 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1423a0ff-203e-461c-8456-95e32ceaf89e lvol 150 00:10:52.915 17:27:19 -- target/nvmf_lvs_grow.sh@33 -- # lvol=094bd9b7-55d2-4366-a3f0-4fd38f152d7a 00:10:52.915 17:27:19 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:52.915 17:27:19 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:53.174 [2024-04-18 17:27:19.495682] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:53.174 [2024-04-18 17:27:19.495774] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:53.174 true 00:10:53.174 17:27:19 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1423a0ff-203e-461c-8456-95e32ceaf89e 00:10:53.174 17:27:19 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:53.434 17:27:19 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:53.434 17:27:19 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:53.693 17:27:19 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
094bd9b7-55d2-4366-a3f0-4fd38f152d7a 00:10:53.952 17:27:20 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:53.952 17:27:20 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:54.211 17:27:20 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1974311 00:10:54.211 17:27:20 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:54.211 17:27:20 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:54.211 17:27:20 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1974311 /var/tmp/bdevperf.sock 00:10:54.211 17:27:20 -- common/autotest_common.sh@817 -- # '[' -z 1974311 ']' 00:10:54.211 17:27:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:54.211 17:27:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:54.211 17:27:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:54.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:54.211 17:27:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:54.211 17:27:20 -- common/autotest_common.sh@10 -- # set +x 00:10:54.469 [2024-04-18 17:27:20.763496] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:10:54.469 [2024-04-18 17:27:20.763582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974311 ] 00:10:54.469 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.469 [2024-04-18 17:27:20.823565] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.469 [2024-04-18 17:27:20.936855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.726 17:27:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:54.726 17:27:21 -- common/autotest_common.sh@850 -- # return 0 00:10:54.726 17:27:21 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:54.985 Nvme0n1 00:10:54.985 17:27:21 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:55.244 [ 00:10:55.244 { 00:10:55.244 "name": "Nvme0n1", 00:10:55.244 "aliases": [ 00:10:55.244 "094bd9b7-55d2-4366-a3f0-4fd38f152d7a" 00:10:55.244 ], 00:10:55.244 "product_name": "NVMe disk", 00:10:55.244 "block_size": 4096, 00:10:55.244 "num_blocks": 38912, 00:10:55.244 "uuid": "094bd9b7-55d2-4366-a3f0-4fd38f152d7a", 00:10:55.244 "assigned_rate_limits": { 00:10:55.244 "rw_ios_per_sec": 0, 00:10:55.244 "rw_mbytes_per_sec": 0, 00:10:55.244 "r_mbytes_per_sec": 0, 00:10:55.244 "w_mbytes_per_sec": 0 00:10:55.244 }, 00:10:55.244 "claimed": false, 00:10:55.244 "zoned": false, 00:10:55.244 "supported_io_types": { 00:10:55.244 "read": true, 00:10:55.244 "write": true, 
00:10:55.244 "unmap": true, 00:10:55.244 "write_zeroes": true, 00:10:55.244 "flush": true, 00:10:55.244 "reset": true, 00:10:55.244 "compare": true, 00:10:55.244 "compare_and_write": true, 00:10:55.244 "abort": true, 00:10:55.244 "nvme_admin": true, 00:10:55.244 "nvme_io": true 00:10:55.244 }, 00:10:55.244 "memory_domains": [ 00:10:55.244 { 00:10:55.244 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:10:55.245 "dma_device_type": 0 00:10:55.245 } 00:10:55.245 ], 00:10:55.245 "driver_specific": { 00:10:55.245 "nvme": [ 00:10:55.245 { 00:10:55.245 "trid": { 00:10:55.245 "trtype": "RDMA", 00:10:55.245 "adrfam": "IPv4", 00:10:55.245 "traddr": "192.168.100.8", 00:10:55.245 "trsvcid": "4420", 00:10:55.245 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:55.245 }, 00:10:55.245 "ctrlr_data": { 00:10:55.245 "cntlid": 1, 00:10:55.245 "vendor_id": "0x8086", 00:10:55.245 "model_number": "SPDK bdev Controller", 00:10:55.245 "serial_number": "SPDK0", 00:10:55.245 "firmware_revision": "24.05", 00:10:55.245 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:55.245 "oacs": { 00:10:55.245 "security": 0, 00:10:55.245 "format": 0, 00:10:55.245 "firmware": 0, 00:10:55.245 "ns_manage": 0 00:10:55.245 }, 00:10:55.245 "multi_ctrlr": true, 00:10:55.245 "ana_reporting": false 00:10:55.245 }, 00:10:55.245 "vs": { 00:10:55.245 "nvme_version": "1.3" 00:10:55.245 }, 00:10:55.245 "ns_data": { 00:10:55.245 "id": 1, 00:10:55.245 "can_share": true 00:10:55.245 } 00:10:55.245 } 00:10:55.245 ], 00:10:55.245 "mp_policy": "active_passive" 00:10:55.245 } 00:10:55.245 } 00:10:55.245 ] 00:10:55.245 17:27:21 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1974443 00:10:55.245 17:27:21 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:55.245 17:27:21 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:55.245 Running I/O for 10 seconds... 
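On the initiator side both runs drive I/O the same way: bdevperf is started with -z so it waits for RPC configuration, the exported namespace is attached as Nvme0, and perform_tests launches the 10-second random-write job whose per-second results follow. A condensed sketch of those calls as they appear in this log, again with paths relative to the SPDK tree:

  # Attach the remote namespace through bdevperf's RPC socket.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

  # Sanity check: the namespace surfaces as bdev Nvme0n1.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000

  # Kick off the workload configured on the command line (-q 128 -w randwrite -t 10).
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests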
00:10:56.622 Latency(us) 00:10:56.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:56.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:56.622 Nvme0n1 : 1.00 21825.00 85.25 0.00 0.00 0.00 0.00 0.00 00:10:56.622 =================================================================================================================== 00:10:56.622 Total : 21825.00 85.25 0.00 0.00 0.00 0.00 0.00 00:10:56.622 00:10:57.188 17:27:23 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1423a0ff-203e-461c-8456-95e32ceaf89e 00:10:57.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.445 Nvme0n1 : 2.00 22688.00 88.62 0.00 0.00 0.00 0.00 0.00 00:10:57.445 =================================================================================================================== 00:10:57.445 Total : 22688.00 88.62 0.00 0.00 0.00 0.00 0.00 00:10:57.445 00:10:57.445 true 00:10:57.445 17:27:23 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1423a0ff-203e-461c-8456-95e32ceaf89e 00:10:57.445 17:27:23 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:57.705 17:27:24 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:57.705 17:27:24 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:57.705 17:27:24 -- target/nvmf_lvs_grow.sh@65 -- # wait 1974443 00:10:58.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:58.271 Nvme0n1 : 3.00 22826.67 89.17 0.00 0.00 0.00 0.00 0.00 00:10:58.271 =================================================================================================================== 00:10:58.271 Total : 22826.67 89.17 0.00 0.00 0.00 0.00 0.00 00:10:58.271 00:10:59.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.653 Nvme0n1 : 4.00 23135.75 90.37 0.00 0.00 0.00 0.00 0.00 00:10:59.653 =================================================================================================================== 00:10:59.653 Total : 23135.75 90.37 0.00 0.00 0.00 0.00 0.00 00:10:59.653 00:11:00.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:00.283 Nvme0n1 : 5.00 23418.20 91.48 0.00 0.00 0.00 0.00 0.00 00:11:00.283 =================================================================================================================== 00:11:00.283 Total : 23418.20 91.48 0.00 0.00 0.00 0.00 0.00 00:11:00.283 00:11:01.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.512 Nvme0n1 : 6.00 23534.83 91.93 0.00 0.00 0.00 0.00 0.00 00:11:01.512 =================================================================================================================== 00:11:01.512 Total : 23534.83 91.93 0.00 0.00 0.00 0.00 0.00 00:11:01.512 00:11:02.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.449 Nvme0n1 : 7.00 23564.86 92.05 0.00 0.00 0.00 0.00 0.00 00:11:02.449 =================================================================================================================== 00:11:02.449 Total : 23564.86 92.05 0.00 0.00 0.00 0.00 0.00 00:11:02.449 00:11:03.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.388 Nvme0n1 : 8.00 23696.75 92.57 0.00 0.00 0.00 0.00 0.00 00:11:03.388 
=================================================================================================================== 00:11:03.388 Total : 23696.75 92.57 0.00 0.00 0.00 0.00 0.00 00:11:03.388 00:11:04.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.327 Nvme0n1 : 9.00 23615.44 92.25 0.00 0.00 0.00 0.00 0.00 00:11:04.327 =================================================================================================================== 00:11:04.327 Total : 23615.44 92.25 0.00 0.00 0.00 0.00 0.00 00:11:04.327 00:11:05.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.264 Nvme0n1 : 10.00 23589.70 92.15 0.00 0.00 0.00 0.00 0.00 00:11:05.264 =================================================================================================================== 00:11:05.264 Total : 23589.70 92.15 0.00 0.00 0.00 0.00 0.00 00:11:05.264 00:11:05.264 00:11:05.264 Latency(us) 00:11:05.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:05.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.264 Nvme0n1 : 10.01 23591.96 92.16 0.00 0.00 5421.01 3592.34 21068.61 00:11:05.264 =================================================================================================================== 00:11:05.264 Total : 23591.96 92.16 0.00 0.00 5421.01 3592.34 21068.61 00:11:05.264 0 00:11:05.525 17:27:31 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1974311 00:11:05.525 17:27:31 -- common/autotest_common.sh@936 -- # '[' -z 1974311 ']' 00:11:05.525 17:27:31 -- common/autotest_common.sh@940 -- # kill -0 1974311 00:11:05.525 17:27:31 -- common/autotest_common.sh@941 -- # uname 00:11:05.525 17:27:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:05.525 17:27:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1974311 00:11:05.525 17:27:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:05.525 17:27:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:05.525 17:27:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1974311' 00:11:05.525 killing process with pid 1974311 00:11:05.525 17:27:31 -- common/autotest_common.sh@955 -- # kill 1974311 00:11:05.525 Received shutdown signal, test time was about 10.000000 seconds 00:11:05.525 00:11:05.525 Latency(us) 00:11:05.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:05.525 =================================================================================================================== 00:11:05.525 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:05.525 17:27:31 -- common/autotest_common.sh@960 -- # wait 1974311 00:11:05.784 17:27:32 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:06.043 17:27:32 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1423a0ff-203e-461c-8456-95e32ceaf89e 00:11:06.043 17:27:32 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:11:06.302 17:27:32 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:11:06.302 17:27:32 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:11:06.302 17:27:32 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1971221 00:11:06.302 17:27:32 -- target/nvmf_lvs_grow.sh@74 -- # wait 1971221 00:11:06.302 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1971221 Killed "${NVMF_APP[@]}" "$@" 00:11:06.302 17:27:32 -- target/nvmf_lvs_grow.sh@74 -- # true 00:11:06.302 17:27:32 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:11:06.302 17:27:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:06.302 17:27:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:06.302 17:27:32 -- common/autotest_common.sh@10 -- # set +x 00:11:06.302 17:27:32 -- nvmf/common.sh@470 -- # nvmfpid=1975659 00:11:06.302 17:27:32 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:06.302 17:27:32 -- nvmf/common.sh@471 -- # waitforlisten 1975659 00:11:06.302 17:27:32 -- common/autotest_common.sh@817 -- # '[' -z 1975659 ']' 00:11:06.302 17:27:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.302 17:27:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:06.302 17:27:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.302 17:27:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:06.302 17:27:32 -- common/autotest_common.sh@10 -- # set +x 00:11:06.302 [2024-04-18 17:27:32.738563] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:11:06.302 [2024-04-18 17:27:32.738655] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.302 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.302 [2024-04-18 17:27:32.798562] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.561 [2024-04-18 17:27:32.907414] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.561 [2024-04-18 17:27:32.907495] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.561 [2024-04-18 17:27:32.907524] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:06.561 [2024-04-18 17:27:32.907536] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:06.561 [2024-04-18 17:27:32.907546] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
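What makes this the dirty variant is visible just above: the first nvmf_tgt (pid 1971221) is killed with SIGKILL while the lvstore is still open, and a fresh target (pid 1975659) is started in its place. When the AIO bdev is re-created on the new target, the blobstore is loaded without a clean shutdown and runs recovery, which is the next thing the log shows. A minimal sketch of that sequence, using this run's pid and repository-relative paths:

  kill -9 1971221                              # SIGKILL the target; the lvstore never unloads cleanly
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # restart the target (nvmfappstart -m 0x1 above)
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  # -> "Performing recovery on blobstore", "Recover: blob 0x0", "Recover: blob 0x1"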
00:11:06.561 [2024-04-18 17:27:32.907577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.561 17:27:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:06.561 17:27:33 -- common/autotest_common.sh@850 -- # return 0 00:11:06.561 17:27:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:06.561 17:27:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:06.561 17:27:33 -- common/autotest_common.sh@10 -- # set +x 00:11:06.561 17:27:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.561 17:27:33 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:06.828 [2024-04-18 17:27:33.299358] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:06.828 [2024-04-18 17:27:33.299499] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:06.828 [2024-04-18 17:27:33.299555] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:06.828 17:27:33 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:11:06.828 17:27:33 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 094bd9b7-55d2-4366-a3f0-4fd38f152d7a 00:11:06.828 17:27:33 -- common/autotest_common.sh@885 -- # local bdev_name=094bd9b7-55d2-4366-a3f0-4fd38f152d7a 00:11:06.828 17:27:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:06.828 17:27:33 -- common/autotest_common.sh@887 -- # local i 00:11:06.828 17:27:33 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:06.828 17:27:33 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:06.828 17:27:33 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:07.087 17:27:33 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 094bd9b7-55d2-4366-a3f0-4fd38f152d7a -t 2000 00:11:07.346 [ 00:11:07.346 { 00:11:07.346 "name": "094bd9b7-55d2-4366-a3f0-4fd38f152d7a", 00:11:07.346 "aliases": [ 00:11:07.346 "lvs/lvol" 00:11:07.346 ], 00:11:07.346 "product_name": "Logical Volume", 00:11:07.346 "block_size": 4096, 00:11:07.346 "num_blocks": 38912, 00:11:07.346 "uuid": "094bd9b7-55d2-4366-a3f0-4fd38f152d7a", 00:11:07.346 "assigned_rate_limits": { 00:11:07.346 "rw_ios_per_sec": 0, 00:11:07.346 "rw_mbytes_per_sec": 0, 00:11:07.346 "r_mbytes_per_sec": 0, 00:11:07.346 "w_mbytes_per_sec": 0 00:11:07.346 }, 00:11:07.346 "claimed": false, 00:11:07.346 "zoned": false, 00:11:07.346 "supported_io_types": { 00:11:07.346 "read": true, 00:11:07.346 "write": true, 00:11:07.346 "unmap": true, 00:11:07.346 "write_zeroes": true, 00:11:07.346 "flush": false, 00:11:07.346 "reset": true, 00:11:07.346 "compare": false, 00:11:07.346 "compare_and_write": false, 00:11:07.346 "abort": false, 00:11:07.346 "nvme_admin": false, 00:11:07.346 "nvme_io": false 00:11:07.346 }, 00:11:07.346 "driver_specific": { 00:11:07.346 "lvol": { 00:11:07.346 "lvol_store_uuid": "1423a0ff-203e-461c-8456-95e32ceaf89e", 00:11:07.346 "base_bdev": "aio_bdev", 00:11:07.346 "thin_provision": false, 00:11:07.346 "snapshot": false, 00:11:07.346 "clone": false, 00:11:07.346 "esnap_clone": false 00:11:07.346 } 00:11:07.346 } 00:11:07.346 } 00:11:07.346 ] 00:11:07.346 17:27:33 -- common/autotest_common.sh@893 -- # return 0 00:11:07.346 17:27:33 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1423a0ff-203e-461c-8456-95e32ceaf89e 00:11:07.346 17:27:33 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:11:07.606 17:27:34 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:11:07.606 17:27:34 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1423a0ff-203e-461c-8456-95e32ceaf89e 00:11:07.606 17:27:34 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:11:07.867 17:27:34 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:11:07.867 17:27:34 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:08.126 [2024-04-18 17:27:34.504163] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:08.126 17:27:34 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1423a0ff-203e-461c-8456-95e32ceaf89e 00:11:08.126 17:27:34 -- common/autotest_common.sh@638 -- # local es=0 00:11:08.126 17:27:34 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1423a0ff-203e-461c-8456-95e32ceaf89e 00:11:08.126 17:27:34 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:08.126 17:27:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:08.126 17:27:34 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:08.126 17:27:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:08.126 17:27:34 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:08.126 17:27:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:08.126 17:27:34 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:08.126 17:27:34 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:08.126 17:27:34 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1423a0ff-203e-461c-8456-95e32ceaf89e 00:11:08.384 request: 00:11:08.384 { 00:11:08.384 "uuid": "1423a0ff-203e-461c-8456-95e32ceaf89e", 00:11:08.384 "method": "bdev_lvol_get_lvstores", 00:11:08.384 "req_id": 1 00:11:08.384 } 00:11:08.384 Got JSON-RPC error response 00:11:08.384 response: 00:11:08.384 { 00:11:08.384 "code": -19, 00:11:08.384 "message": "No such device" 00:11:08.384 } 00:11:08.384 17:27:34 -- common/autotest_common.sh@641 -- # es=1 00:11:08.384 17:27:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:08.384 17:27:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:08.384 17:27:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:08.384 17:27:34 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:08.643 aio_bdev 00:11:08.643 17:27:35 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 094bd9b7-55d2-4366-a3f0-4fd38f152d7a 00:11:08.643 17:27:35 -- common/autotest_common.sh@885 -- # local 
bdev_name=094bd9b7-55d2-4366-a3f0-4fd38f152d7a 00:11:08.643 17:27:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:08.643 17:27:35 -- common/autotest_common.sh@887 -- # local i 00:11:08.643 17:27:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:08.643 17:27:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:08.643 17:27:35 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:08.903 17:27:35 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 094bd9b7-55d2-4366-a3f0-4fd38f152d7a -t 2000 00:11:09.163 [ 00:11:09.163 { 00:11:09.163 "name": "094bd9b7-55d2-4366-a3f0-4fd38f152d7a", 00:11:09.163 "aliases": [ 00:11:09.163 "lvs/lvol" 00:11:09.163 ], 00:11:09.163 "product_name": "Logical Volume", 00:11:09.163 "block_size": 4096, 00:11:09.163 "num_blocks": 38912, 00:11:09.163 "uuid": "094bd9b7-55d2-4366-a3f0-4fd38f152d7a", 00:11:09.163 "assigned_rate_limits": { 00:11:09.163 "rw_ios_per_sec": 0, 00:11:09.163 "rw_mbytes_per_sec": 0, 00:11:09.163 "r_mbytes_per_sec": 0, 00:11:09.163 "w_mbytes_per_sec": 0 00:11:09.163 }, 00:11:09.163 "claimed": false, 00:11:09.163 "zoned": false, 00:11:09.163 "supported_io_types": { 00:11:09.163 "read": true, 00:11:09.163 "write": true, 00:11:09.163 "unmap": true, 00:11:09.163 "write_zeroes": true, 00:11:09.163 "flush": false, 00:11:09.163 "reset": true, 00:11:09.163 "compare": false, 00:11:09.163 "compare_and_write": false, 00:11:09.163 "abort": false, 00:11:09.163 "nvme_admin": false, 00:11:09.163 "nvme_io": false 00:11:09.163 }, 00:11:09.163 "driver_specific": { 00:11:09.163 "lvol": { 00:11:09.163 "lvol_store_uuid": "1423a0ff-203e-461c-8456-95e32ceaf89e", 00:11:09.163 "base_bdev": "aio_bdev", 00:11:09.163 "thin_provision": false, 00:11:09.163 "snapshot": false, 00:11:09.163 "clone": false, 00:11:09.163 "esnap_clone": false 00:11:09.163 } 00:11:09.163 } 00:11:09.163 } 00:11:09.163 ] 00:11:09.163 17:27:35 -- common/autotest_common.sh@893 -- # return 0 00:11:09.163 17:27:35 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1423a0ff-203e-461c-8456-95e32ceaf89e 00:11:09.163 17:27:35 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:11:09.421 17:27:35 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:11:09.421 17:27:35 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1423a0ff-203e-461c-8456-95e32ceaf89e 00:11:09.421 17:27:35 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:11:09.679 17:27:36 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:11:09.679 17:27:36 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 094bd9b7-55d2-4366-a3f0-4fd38f152d7a 00:11:09.938 17:27:36 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1423a0ff-203e-461c-8456-95e32ceaf89e 00:11:10.198 17:27:36 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:10.198 17:27:36 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:10.458 00:11:10.458 real 0m18.490s 00:11:10.458 user 0m47.774s 00:11:10.458 sys 0m3.872s 00:11:10.458 17:27:36 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:11:10.458 17:27:36 -- common/autotest_common.sh@10 -- # set +x 00:11:10.458 ************************************ 00:11:10.458 END TEST lvs_grow_dirty 00:11:10.458 ************************************ 00:11:10.458 17:27:36 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:10.458 17:27:36 -- common/autotest_common.sh@794 -- # type=--id 00:11:10.458 17:27:36 -- common/autotest_common.sh@795 -- # id=0 00:11:10.458 17:27:36 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:11:10.458 17:27:36 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:10.458 17:27:36 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:11:10.458 17:27:36 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:11:10.458 17:27:36 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:11:10.458 17:27:36 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:10.458 nvmf_trace.0 00:11:10.458 17:27:36 -- common/autotest_common.sh@809 -- # return 0 00:11:10.458 17:27:36 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:10.458 17:27:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:10.458 17:27:36 -- nvmf/common.sh@117 -- # sync 00:11:10.458 17:27:36 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:10.458 17:27:36 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:10.458 17:27:36 -- nvmf/common.sh@120 -- # set +e 00:11:10.458 17:27:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.458 17:27:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:10.458 rmmod nvme_rdma 00:11:10.458 rmmod nvme_fabrics 00:11:10.458 17:27:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.458 17:27:36 -- nvmf/common.sh@124 -- # set -e 00:11:10.458 17:27:36 -- nvmf/common.sh@125 -- # return 0 00:11:10.458 17:27:36 -- nvmf/common.sh@478 -- # '[' -n 1975659 ']' 00:11:10.458 17:27:36 -- nvmf/common.sh@479 -- # killprocess 1975659 00:11:10.458 17:27:36 -- common/autotest_common.sh@936 -- # '[' -z 1975659 ']' 00:11:10.458 17:27:36 -- common/autotest_common.sh@940 -- # kill -0 1975659 00:11:10.458 17:27:36 -- common/autotest_common.sh@941 -- # uname 00:11:10.458 17:27:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:10.458 17:27:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1975659 00:11:10.458 17:27:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:10.458 17:27:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:10.458 17:27:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1975659' 00:11:10.458 killing process with pid 1975659 00:11:10.459 17:27:36 -- common/autotest_common.sh@955 -- # kill 1975659 00:11:10.459 17:27:36 -- common/autotest_common.sh@960 -- # wait 1975659 00:11:10.718 17:27:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:10.718 17:27:37 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:10.718 00:11:10.718 real 0m38.770s 00:11:10.718 user 1m10.357s 00:11:10.718 sys 0m6.935s 00:11:10.718 17:27:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:10.718 17:27:37 -- common/autotest_common.sh@10 -- # set +x 00:11:10.718 ************************************ 00:11:10.718 END TEST nvmf_lvs_grow 00:11:10.718 ************************************ 00:11:10.718 17:27:37 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:11:10.718 17:27:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:10.718 17:27:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.718 17:27:37 -- common/autotest_common.sh@10 -- # set +x 00:11:10.977 ************************************ 00:11:10.977 START TEST nvmf_bdev_io_wait 00:11:10.977 ************************************ 00:11:10.977 17:27:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:11:10.977 * Looking for test storage... 00:11:10.977 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:10.977 17:27:37 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.977 17:27:37 -- nvmf/common.sh@7 -- # uname -s 00:11:10.977 17:27:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.977 17:27:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.977 17:27:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.977 17:27:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.977 17:27:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.977 17:27:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.977 17:27:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.977 17:27:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.977 17:27:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.977 17:27:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.977 17:27:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:10.977 17:27:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:10.977 17:27:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.977 17:27:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.977 17:27:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.977 17:27:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.977 17:27:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:10.977 17:27:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.977 17:27:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.977 17:27:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.977 17:27:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.977 17:27:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.977 17:27:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.977 17:27:37 -- paths/export.sh@5 -- # export PATH 00:11:10.977 17:27:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.977 17:27:37 -- nvmf/common.sh@47 -- # : 0 00:11:10.977 17:27:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.977 17:27:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.977 17:27:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.977 17:27:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.977 17:27:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.977 17:27:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.977 17:27:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.977 17:27:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.977 17:27:37 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:10.977 17:27:37 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:10.977 17:27:37 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:10.977 17:27:37 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:10.977 17:27:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.977 17:27:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:10.978 17:27:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:10.978 17:27:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:10.978 17:27:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.978 17:27:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.978 17:27:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.978 17:27:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:10.978 17:27:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:10.978 17:27:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:10.978 17:27:37 -- common/autotest_common.sh@10 -- # set +x 00:11:12.883 17:27:39 -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci 00:11:12.883 17:27:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.883 17:27:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.883 17:27:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.883 17:27:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.883 17:27:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:12.883 17:27:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.883 17:27:39 -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.883 17:27:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.883 17:27:39 -- nvmf/common.sh@296 -- # e810=() 00:11:12.883 17:27:39 -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.883 17:27:39 -- nvmf/common.sh@297 -- # x722=() 00:11:12.883 17:27:39 -- nvmf/common.sh@297 -- # local -ga x722 00:11:12.883 17:27:39 -- nvmf/common.sh@298 -- # mlx=() 00:11:12.883 17:27:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.883 17:27:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.883 17:27:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.883 17:27:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.883 17:27:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.883 17:27:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.883 17:27:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.883 17:27:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.883 17:27:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.883 17:27:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.883 17:27:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.883 17:27:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.883 17:27:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.883 17:27:39 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:12.883 17:27:39 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:12.883 17:27:39 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:12.883 17:27:39 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:12.883 17:27:39 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:12.883 17:27:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.883 17:27:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.883 17:27:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:11:12.883 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:11:12.883 17:27:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:12.883 17:27:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:12.883 17:27:39 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:12.883 17:27:39 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:12.883 17:27:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.883 17:27:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:11:12.883 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:11:12.884 17:27:39 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:12.884 17:27:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:11:12.884 17:27:39 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.884 17:27:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.884 17:27:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:12.884 17:27:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.884 17:27:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:11:12.884 Found net devices under 0000:09:00.0: mlx_0_0 00:11:12.884 17:27:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.884 17:27:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.884 17:27:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.884 17:27:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:12.884 17:27:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.884 17:27:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:11:12.884 Found net devices under 0000:09:00.1: mlx_0_1 00:11:12.884 17:27:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.884 17:27:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:12.884 17:27:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:12.884 17:27:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:12.884 17:27:39 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:12.884 17:27:39 -- nvmf/common.sh@58 -- # uname 00:11:12.884 17:27:39 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:12.884 17:27:39 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:12.884 17:27:39 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:12.884 17:27:39 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:12.884 17:27:39 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:12.884 17:27:39 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:12.884 17:27:39 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:12.884 17:27:39 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:12.884 17:27:39 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:12.884 17:27:39 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:12.884 17:27:39 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:12.884 17:27:39 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:12.884 17:27:39 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:12.884 17:27:39 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:12.884 17:27:39 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:12.884 17:27:39 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:12.884 17:27:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:12.884 17:27:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.884 17:27:39 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:12.884 17:27:39 -- nvmf/common.sh@105 -- # continue 2 00:11:12.884 17:27:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:12.884 17:27:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.884 17:27:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.884 17:27:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:12.884 17:27:39 -- nvmf/common.sh@105 -- # continue 2 00:11:12.884 17:27:39 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:12.884 17:27:39 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:12.884 17:27:39 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:12.884 17:27:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:12.884 17:27:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:12.884 17:27:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:12.884 17:27:39 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:12.884 17:27:39 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:12.884 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:12.884 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:11:12.884 altname enp9s0f0np0 00:11:12.884 inet 192.168.100.8/24 scope global mlx_0_0 00:11:12.884 valid_lft forever preferred_lft forever 00:11:12.884 17:27:39 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:12.884 17:27:39 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:12.884 17:27:39 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:12.884 17:27:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:12.884 17:27:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:12.884 17:27:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:12.884 17:27:39 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:12.884 17:27:39 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:12.884 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:12.884 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:11:12.884 altname enp9s0f1np1 00:11:12.884 inet 192.168.100.9/24 scope global mlx_0_1 00:11:12.884 valid_lft forever preferred_lft forever 00:11:12.884 17:27:39 -- nvmf/common.sh@411 -- # return 0 00:11:12.884 17:27:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:12.884 17:27:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:12.884 17:27:39 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:12.884 17:27:39 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:12.884 17:27:39 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:12.884 17:27:39 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:12.884 17:27:39 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:12.884 17:27:39 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:12.884 17:27:39 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:12.884 17:27:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:12.884 17:27:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.884 17:27:39 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:12.884 17:27:39 -- nvmf/common.sh@105 -- # continue 2 00:11:12.884 17:27:39 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:12.884 17:27:39 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.884 17:27:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:12.884 17:27:39 -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:12.884 17:27:39 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:12.884 17:27:39 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:12.884 17:27:39 -- nvmf/common.sh@105 -- # continue 2 00:11:12.884 17:27:39 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:12.884 17:27:39 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:12.884 17:27:39 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:12.884 17:27:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:12.884 17:27:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:12.884 17:27:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:12.884 17:27:39 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:12.884 17:27:39 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:12.884 17:27:39 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:12.884 17:27:39 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:12.884 17:27:39 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:12.884 17:27:39 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:12.884 17:27:39 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:12.884 192.168.100.9' 00:11:12.884 17:27:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:12.884 192.168.100.9' 00:11:12.884 17:27:39 -- nvmf/common.sh@446 -- # head -n 1 00:11:12.884 17:27:39 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:12.884 17:27:39 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:12.884 192.168.100.9' 00:11:12.884 17:27:39 -- nvmf/common.sh@447 -- # tail -n +2 00:11:12.884 17:27:39 -- nvmf/common.sh@447 -- # head -n 1 00:11:12.884 17:27:39 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:12.884 17:27:39 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:12.884 17:27:39 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:12.884 17:27:39 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:12.884 17:27:39 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:12.884 17:27:39 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:12.884 17:27:39 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:12.884 17:27:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:12.884 17:27:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:12.884 17:27:39 -- common/autotest_common.sh@10 -- # set +x 00:11:12.884 17:27:39 -- nvmf/common.sh@470 -- # nvmfpid=1977919 00:11:12.885 17:27:39 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:12.885 17:27:39 -- nvmf/common.sh@471 -- # waitforlisten 1977919 00:11:12.885 17:27:39 -- common/autotest_common.sh@817 -- # '[' -z 1977919 ']' 00:11:12.885 17:27:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.885 17:27:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:12.885 17:27:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.885 17:27:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:12.885 17:27:39 -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 [2024-04-18 17:27:39.431857] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:11:13.144 [2024-04-18 17:27:39.431950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.144 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.144 [2024-04-18 17:27:39.490054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.144 [2024-04-18 17:27:39.600805] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.144 [2024-04-18 17:27:39.600870] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.144 [2024-04-18 17:27:39.600898] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.144 [2024-04-18 17:27:39.600913] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.144 [2024-04-18 17:27:39.600924] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.144 [2024-04-18 17:27:39.601064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.144 [2024-04-18 17:27:39.601133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.144 [2024-04-18 17:27:39.601190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.144 [2024-04-18 17:27:39.601193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.144 17:27:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:13.144 17:27:39 -- common/autotest_common.sh@850 -- # return 0 00:11:13.144 17:27:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:13.144 17:27:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:13.144 17:27:39 -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 17:27:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.144 17:27:39 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:13.144 17:27:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.144 17:27:39 -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 17:27:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.144 17:27:39 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:13.144 17:27:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.144 17:27:39 -- common/autotest_common.sh@10 -- # set +x 00:11:13.404 17:27:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.404 17:27:39 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:13.404 17:27:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.404 17:27:39 -- common/autotest_common.sh@10 -- # set +x 00:11:13.404 [2024-04-18 17:27:39.779226] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1663ac0/0x1667fb0) succeed. 00:11:13.404 [2024-04-18 17:27:39.789606] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16650b0/0x16a9640) succeed. 
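For reference, the target-side setup driven through rpc_cmd in this part of the trace can be reproduced outside the harness with SPDK's stock scripts/rpc.py (rpc_cmd is a thin wrapper around it). A minimal sketch, assuming an nvmf_tgt started with --wait-for-rpc as above and listening on the default /var/tmp/spdk.sock; the pool sizes, NQN, serial number and listen address are the ones used in this run:

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Shrink the bdev_io pool so that bdev I/O has to wait for free descriptors
# (the behaviour this test exercises), then finish subsystem initialization.
$RPC bdev_set_options -p 5 -c 1
$RPC framework_start_init

# RDMA transport, a 64 MiB malloc bdev with 512-byte blocks, and a subsystem
# exposing that bdev as a namespace on 192.168.100.8:4420.
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420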
00:11:13.404 17:27:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.404 17:27:39 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:13.404 17:27:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.404 17:27:39 -- common/autotest_common.sh@10 -- # set +x 00:11:13.664 Malloc0 00:11:13.664 17:27:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.664 17:27:39 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:13.664 17:27:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.664 17:27:39 -- common/autotest_common.sh@10 -- # set +x 00:11:13.664 17:27:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.664 17:27:39 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.664 17:27:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.664 17:27:39 -- common/autotest_common.sh@10 -- # set +x 00:11:13.665 17:27:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:13.665 17:27:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.665 17:27:39 -- common/autotest_common.sh@10 -- # set +x 00:11:13.665 [2024-04-18 17:27:39.992696] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:13.665 17:27:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1978061 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@30 -- # READ_PID=1978063 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:13.665 17:27:39 -- nvmf/common.sh@521 -- # config=() 00:11:13.665 17:27:39 -- nvmf/common.sh@521 -- # local subsystem config 00:11:13.665 17:27:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:13.665 17:27:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:13.665 { 00:11:13.665 "params": { 00:11:13.665 "name": "Nvme$subsystem", 00:11:13.665 "trtype": "$TEST_TRANSPORT", 00:11:13.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:13.665 "adrfam": "ipv4", 00:11:13.665 "trsvcid": "$NVMF_PORT", 00:11:13.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:13.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:13.665 "hdgst": ${hdgst:-false}, 00:11:13.665 "ddgst": ${ddgst:-false} 00:11:13.665 }, 00:11:13.665 "method": "bdev_nvme_attach_controller" 00:11:13.665 } 00:11:13.665 EOF 00:11:13.665 )") 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1978065 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:13.665 17:27:39 -- nvmf/common.sh@521 -- # config=() 00:11:13.665 17:27:39 -- nvmf/common.sh@521 -- # local subsystem config 00:11:13.665 17:27:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:13.665 17:27:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:13.665 { 00:11:13.665 "params": { 00:11:13.665 "name": 
"Nvme$subsystem", 00:11:13.665 "trtype": "$TEST_TRANSPORT", 00:11:13.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:13.665 "adrfam": "ipv4", 00:11:13.665 "trsvcid": "$NVMF_PORT", 00:11:13.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:13.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:13.665 "hdgst": ${hdgst:-false}, 00:11:13.665 "ddgst": ${ddgst:-false} 00:11:13.665 }, 00:11:13.665 "method": "bdev_nvme_attach_controller" 00:11:13.665 } 00:11:13.665 EOF 00:11:13.665 )") 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:13.665 17:27:39 -- nvmf/common.sh@521 -- # config=() 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1978067 00:11:13.665 17:27:39 -- nvmf/common.sh@521 -- # local subsystem config 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@35 -- # sync 00:11:13.665 17:27:39 -- nvmf/common.sh@543 -- # cat 00:11:13.665 17:27:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:13.665 17:27:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:13.665 { 00:11:13.665 "params": { 00:11:13.665 "name": "Nvme$subsystem", 00:11:13.665 "trtype": "$TEST_TRANSPORT", 00:11:13.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:13.665 "adrfam": "ipv4", 00:11:13.665 "trsvcid": "$NVMF_PORT", 00:11:13.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:13.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:13.665 "hdgst": ${hdgst:-false}, 00:11:13.665 "ddgst": ${ddgst:-false} 00:11:13.665 }, 00:11:13.665 "method": "bdev_nvme_attach_controller" 00:11:13.665 } 00:11:13.665 EOF 00:11:13.665 )") 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:13.665 17:27:39 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:13.665 17:27:39 -- nvmf/common.sh@543 -- # cat 00:11:13.665 17:27:39 -- nvmf/common.sh@521 -- # config=() 00:11:13.665 17:27:40 -- nvmf/common.sh@521 -- # local subsystem config 00:11:13.665 17:27:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:13.665 17:27:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:13.665 { 00:11:13.665 "params": { 00:11:13.665 "name": "Nvme$subsystem", 00:11:13.665 "trtype": "$TEST_TRANSPORT", 00:11:13.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:13.665 "adrfam": "ipv4", 00:11:13.665 "trsvcid": "$NVMF_PORT", 00:11:13.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:13.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:13.665 "hdgst": ${hdgst:-false}, 00:11:13.665 "ddgst": ${ddgst:-false} 00:11:13.665 }, 00:11:13.665 "method": "bdev_nvme_attach_controller" 00:11:13.665 } 00:11:13.665 EOF 00:11:13.665 )") 00:11:13.665 17:27:40 -- nvmf/common.sh@543 -- # cat 00:11:13.665 17:27:40 -- target/bdev_io_wait.sh@37 -- # wait 1978061 00:11:13.665 17:27:40 -- nvmf/common.sh@545 -- # jq . 00:11:13.665 17:27:40 -- nvmf/common.sh@543 -- # cat 00:11:13.665 17:27:40 -- nvmf/common.sh@545 -- # jq . 00:11:13.665 17:27:40 -- nvmf/common.sh@545 -- # jq . 
00:11:13.665 17:27:40 -- nvmf/common.sh@546 -- # IFS=, 00:11:13.665 17:27:40 -- nvmf/common.sh@546 -- # IFS=, 00:11:13.665 17:27:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:13.665 "params": { 00:11:13.665 "name": "Nvme1", 00:11:13.665 "trtype": "rdma", 00:11:13.665 "traddr": "192.168.100.8", 00:11:13.665 "adrfam": "ipv4", 00:11:13.665 "trsvcid": "4420", 00:11:13.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:13.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:13.665 "hdgst": false, 00:11:13.665 "ddgst": false 00:11:13.665 }, 00:11:13.665 "method": "bdev_nvme_attach_controller" 00:11:13.665 }' 00:11:13.665 17:27:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:13.665 "params": { 00:11:13.665 "name": "Nvme1", 00:11:13.665 "trtype": "rdma", 00:11:13.665 "traddr": "192.168.100.8", 00:11:13.665 "adrfam": "ipv4", 00:11:13.665 "trsvcid": "4420", 00:11:13.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:13.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:13.665 "hdgst": false, 00:11:13.665 "ddgst": false 00:11:13.665 }, 00:11:13.665 "method": "bdev_nvme_attach_controller" 00:11:13.665 }' 00:11:13.665 17:27:40 -- nvmf/common.sh@545 -- # jq . 00:11:13.665 17:27:40 -- nvmf/common.sh@546 -- # IFS=, 00:11:13.665 17:27:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:13.665 "params": { 00:11:13.665 "name": "Nvme1", 00:11:13.665 "trtype": "rdma", 00:11:13.665 "traddr": "192.168.100.8", 00:11:13.665 "adrfam": "ipv4", 00:11:13.665 "trsvcid": "4420", 00:11:13.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:13.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:13.665 "hdgst": false, 00:11:13.665 "ddgst": false 00:11:13.665 }, 00:11:13.665 "method": "bdev_nvme_attach_controller" 00:11:13.665 }' 00:11:13.665 17:27:40 -- nvmf/common.sh@546 -- # IFS=, 00:11:13.665 17:27:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:13.665 "params": { 00:11:13.665 "name": "Nvme1", 00:11:13.665 "trtype": "rdma", 00:11:13.665 "traddr": "192.168.100.8", 00:11:13.665 "adrfam": "ipv4", 00:11:13.665 "trsvcid": "4420", 00:11:13.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:13.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:13.665 "hdgst": false, 00:11:13.665 "ddgst": false 00:11:13.665 }, 00:11:13.665 "method": "bdev_nvme_attach_controller" 00:11:13.665 }' 00:11:13.665 [2024-04-18 17:27:40.036818] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:11:13.665 [2024-04-18 17:27:40.036820] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:11:13.665 [2024-04-18 17:27:40.036821] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:11:13.665 [2024-04-18 17:27:40.036819] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:11:13.665 [2024-04-18 17:27:40.036913] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:13.665 [2024-04-18 17:27:40.036912] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:13.665 [2024-04-18 17:27:40.036913] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:13.665 [2024-04-18 17:27:40.036914] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:13.665 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.665 EAL: No free 2048 kB hugepages reported on node 1 [2024-04-18 17:27:40.207073] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.924 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.924 [2024-04-18 17:27:40.302969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:13.924 [2024-04-18 17:27:40.309252] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.924 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.924 [2024-04-18 17:27:40.403751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:13.924 [2024-04-18 17:27:40.406744] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.182 [2024-04-18 17:27:40.503474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:14.182 [2024-04-18 17:27:40.516274] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.182 [2024-04-18 17:27:40.614971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:11:14.182 Running I/O for 1 seconds... 00:11:14.182 Running I/O for 1 seconds... 00:11:14.182 Running I/O for 1 seconds... 00:11:14.441 Running I/O for 1 seconds...
00:11:15.375 00:11:15.375 Latency(us) 00:11:15.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.375 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:15.375 Nvme1n1 : 1.00 205294.52 801.93 0.00 0.00 620.89 236.66 2087.44 00:11:15.375 =================================================================================================================== 00:11:15.375 Total : 205294.52 801.93 0.00 0.00 620.89 236.66 2087.44 00:11:15.375 00:11:15.375 Latency(us) 00:11:15.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.375 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:15.375 Nvme1n1 : 1.01 15860.75 61.96 0.00 0.00 8042.58 5655.51 15243.19 00:11:15.375 =================================================================================================================== 00:11:15.375 Total : 15860.75 61.96 0.00 0.00 8042.58 5655.51 15243.19 00:11:15.375 00:11:15.375 Latency(us) 00:11:15.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.375 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:15.375 Nvme1n1 : 1.01 15279.44 59.69 0.00 0.00 8347.98 5631.24 17864.63 00:11:15.375 =================================================================================================================== 00:11:15.375 Total : 15279.44 59.69 0.00 0.00 8347.98 5631.24 17864.63 00:11:15.375 00:11:15.375 Latency(us) 00:11:15.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.375 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:15.375 Nvme1n1 : 1.00 16828.83 65.74 0.00 0.00 7583.13 4636.07 16796.63 00:11:15.375 =================================================================================================================== 00:11:15.375 Total : 16828.83 65.74 0.00 0.00 7583.13 4636.07 16796.63 00:11:15.943 17:27:42 -- target/bdev_io_wait.sh@38 -- # wait 1978063 00:11:15.943 17:27:42 -- target/bdev_io_wait.sh@39 -- # wait 1978065 00:11:15.943 17:27:42 -- target/bdev_io_wait.sh@40 -- # wait 1978067 00:11:15.943 17:27:42 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.943 17:27:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.943 17:27:42 -- common/autotest_common.sh@10 -- # set +x 00:11:15.943 17:27:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.943 17:27:42 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:15.943 17:27:42 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:15.943 17:27:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:15.943 17:27:42 -- nvmf/common.sh@117 -- # sync 00:11:15.943 17:27:42 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:15.943 17:27:42 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:15.943 17:27:42 -- nvmf/common.sh@120 -- # set +e 00:11:15.943 17:27:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:15.943 17:27:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:15.943 rmmod nvme_rdma 00:11:15.943 rmmod nvme_fabrics 00:11:15.943 17:27:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:15.943 17:27:42 -- nvmf/common.sh@124 -- # set -e 00:11:15.943 17:27:42 -- nvmf/common.sh@125 -- # return 0 00:11:15.943 17:27:42 -- nvmf/common.sh@478 -- # '[' -n 1977919 ']' 00:11:15.943 17:27:42 -- nvmf/common.sh@479 -- # killprocess 1977919 00:11:15.943 17:27:42 -- common/autotest_common.sh@936 -- # '[' -z 1977919 ']' 
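Each of the four jobs above is a separate bdevperf instance whose bdev layer comes from a JSON config generated on the fly and passed as /dev/fd/63; the trace prints only the bdev_nvme_attach_controller parameters that go into it. A standalone sketch of the 0x10 write job follows; the surrounding "subsystems"/"bdev" wrapper is assumed here to match SPDK's usual --json config layout, and /tmp/nvme1.json is just a scratch file name for the example:

BDEVPERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf

# JSON config that attaches the RDMA-exported namespace as bdev Nvme1n1.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 128 outstanding 4 KiB writes for 1 second on core mask 0x10, 256 MB of hugepage memory.
$BDEVPERF -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256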
00:11:15.943 17:27:42 -- common/autotest_common.sh@940 -- # kill -0 1977919 00:11:15.943 17:27:42 -- common/autotest_common.sh@941 -- # uname 00:11:15.943 17:27:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:15.943 17:27:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1977919 00:11:15.943 17:27:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:15.943 17:27:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:15.943 17:27:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1977919' 00:11:15.943 killing process with pid 1977919 00:11:15.943 17:27:42 -- common/autotest_common.sh@955 -- # kill 1977919 00:11:15.943 17:27:42 -- common/autotest_common.sh@960 -- # wait 1977919 00:11:16.200 17:27:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:16.200 17:27:42 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:16.200 00:11:16.200 real 0m5.369s 00:11:16.200 user 0m18.496s 00:11:16.200 sys 0m2.718s 00:11:16.200 17:27:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:16.200 17:27:42 -- common/autotest_common.sh@10 -- # set +x 00:11:16.200 ************************************ 00:11:16.200 END TEST nvmf_bdev_io_wait 00:11:16.200 ************************************ 00:11:16.200 17:27:42 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:16.200 17:27:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:16.201 17:27:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.201 17:27:42 -- common/autotest_common.sh@10 -- # set +x 00:11:16.458 ************************************ 00:11:16.458 START TEST nvmf_queue_depth 00:11:16.458 ************************************ 00:11:16.458 17:27:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:11:16.458 * Looking for test storage... 
00:11:16.458 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:16.458 17:27:42 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.458 17:27:42 -- nvmf/common.sh@7 -- # uname -s 00:11:16.458 17:27:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.458 17:27:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.458 17:27:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.458 17:27:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.458 17:27:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.458 17:27:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.458 17:27:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.458 17:27:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.458 17:27:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.458 17:27:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.458 17:27:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:16.458 17:27:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:16.458 17:27:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.458 17:27:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.458 17:27:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.458 17:27:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.458 17:27:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:16.458 17:27:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.458 17:27:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.458 17:27:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.458 17:27:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.458 17:27:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.458 17:27:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.458 17:27:42 -- paths/export.sh@5 -- # export PATH 00:11:16.458 17:27:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.458 17:27:42 -- nvmf/common.sh@47 -- # : 0 00:11:16.458 17:27:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:16.458 17:27:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:16.458 17:27:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.458 17:27:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.458 17:27:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.458 17:27:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:16.458 17:27:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:16.458 17:27:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:16.458 17:27:42 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:16.458 17:27:42 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:16.458 17:27:42 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:16.458 17:27:42 -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:16.458 17:27:42 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:16.458 17:27:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.458 17:27:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:16.458 17:27:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:16.458 17:27:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:16.458 17:27:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.458 17:27:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:16.458 17:27:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.458 17:27:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:16.458 17:27:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:16.458 17:27:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:16.458 17:27:42 -- common/autotest_common.sh@10 -- # set +x 00:11:18.408 17:27:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:18.408 17:27:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:18.408 17:27:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:18.408 17:27:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:18.408 17:27:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:18.408 17:27:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:18.408 17:27:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:18.408 17:27:44 -- nvmf/common.sh@295 -- # net_devs=() 
00:11:18.408 17:27:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:18.408 17:27:44 -- nvmf/common.sh@296 -- # e810=() 00:11:18.408 17:27:44 -- nvmf/common.sh@296 -- # local -ga e810 00:11:18.408 17:27:44 -- nvmf/common.sh@297 -- # x722=() 00:11:18.408 17:27:44 -- nvmf/common.sh@297 -- # local -ga x722 00:11:18.408 17:27:44 -- nvmf/common.sh@298 -- # mlx=() 00:11:18.408 17:27:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:18.408 17:27:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.408 17:27:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.408 17:27:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.408 17:27:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.408 17:27:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.408 17:27:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.408 17:27:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.408 17:27:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.408 17:27:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.408 17:27:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.408 17:27:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.408 17:27:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:18.408 17:27:44 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:18.408 17:27:44 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:18.408 17:27:44 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:18.408 17:27:44 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:18.408 17:27:44 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:18.408 17:27:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:18.408 17:27:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.408 17:27:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:11:18.408 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:11:18.408 17:27:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:18.408 17:27:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:18.408 17:27:44 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:18.408 17:27:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:18.408 17:27:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.408 17:27:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:11:18.408 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:11:18.408 17:27:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:18.408 17:27:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:18.408 17:27:44 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:18.408 17:27:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:18.408 17:27:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:18.408 17:27:44 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:18.408 17:27:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.408 17:27:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.408 17:27:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:18.408 17:27:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.408 17:27:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: 
mlx_0_0' 00:11:18.408 Found net devices under 0000:09:00.0: mlx_0_0 00:11:18.408 17:27:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.408 17:27:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.408 17:27:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.408 17:27:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:18.408 17:27:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.408 17:27:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:11:18.408 Found net devices under 0000:09:00.1: mlx_0_1 00:11:18.408 17:27:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.408 17:27:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:18.408 17:27:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:18.408 17:27:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:18.408 17:27:44 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:18.408 17:27:44 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:18.409 17:27:44 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:18.409 17:27:44 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:18.409 17:27:44 -- nvmf/common.sh@58 -- # uname 00:11:18.409 17:27:44 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:18.409 17:27:44 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:18.409 17:27:44 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:18.409 17:27:44 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:18.409 17:27:44 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:18.409 17:27:44 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:18.409 17:27:44 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:18.409 17:27:44 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:18.409 17:27:44 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:18.409 17:27:44 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:18.409 17:27:44 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:18.409 17:27:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:18.409 17:27:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:18.409 17:27:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:18.409 17:27:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:18.409 17:27:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:18.409 17:27:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:18.409 17:27:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.409 17:27:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:18.409 17:27:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:18.409 17:27:44 -- nvmf/common.sh@105 -- # continue 2 00:11:18.409 17:27:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:18.409 17:27:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.409 17:27:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:18.409 17:27:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.409 17:27:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:18.409 17:27:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:18.409 17:27:44 -- nvmf/common.sh@105 -- # continue 2 00:11:18.409 17:27:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:18.409 17:27:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:18.409 17:27:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:18.409 17:27:44 -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:18.409 17:27:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:18.409 17:27:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:18.409 17:27:44 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:18.409 17:27:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:18.409 17:27:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:18.409 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:18.409 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:11:18.409 altname enp9s0f0np0 00:11:18.409 inet 192.168.100.8/24 scope global mlx_0_0 00:11:18.409 valid_lft forever preferred_lft forever 00:11:18.409 17:27:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:18.409 17:27:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:18.409 17:27:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:18.409 17:27:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:18.409 17:27:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:18.409 17:27:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:18.409 17:27:44 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:18.409 17:27:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:18.409 17:27:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:18.409 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:18.409 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:11:18.409 altname enp9s0f1np1 00:11:18.409 inet 192.168.100.9/24 scope global mlx_0_1 00:11:18.409 valid_lft forever preferred_lft forever 00:11:18.409 17:27:44 -- nvmf/common.sh@411 -- # return 0 00:11:18.409 17:27:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:18.409 17:27:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:18.409 17:27:44 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:18.409 17:27:44 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:18.409 17:27:44 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:18.409 17:27:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:18.409 17:27:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:18.409 17:27:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:18.409 17:27:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:18.409 17:27:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:18.409 17:27:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:18.409 17:27:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.409 17:27:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:18.409 17:27:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:18.409 17:27:44 -- nvmf/common.sh@105 -- # continue 2 00:11:18.409 17:27:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:18.409 17:27:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.409 17:27:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:18.409 17:27:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:18.409 17:27:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:18.409 17:27:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:18.409 17:27:44 -- nvmf/common.sh@105 -- # continue 2 00:11:18.409 17:27:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:18.409 17:27:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:18.409 17:27:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 
00:11:18.409 17:27:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:18.409 17:27:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:18.409 17:27:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:18.409 17:27:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:18.409 17:27:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:18.409 17:27:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:18.409 17:27:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:18.409 17:27:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:18.409 17:27:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:18.409 17:27:44 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:18.409 192.168.100.9' 00:11:18.409 17:27:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:18.409 192.168.100.9' 00:11:18.409 17:27:44 -- nvmf/common.sh@446 -- # head -n 1 00:11:18.409 17:27:44 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:18.409 17:27:44 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:18.409 192.168.100.9' 00:11:18.409 17:27:44 -- nvmf/common.sh@447 -- # tail -n +2 00:11:18.409 17:27:44 -- nvmf/common.sh@447 -- # head -n 1 00:11:18.409 17:27:44 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:18.409 17:27:44 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:18.409 17:27:44 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:18.409 17:27:44 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:18.409 17:27:44 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:18.409 17:27:44 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:18.409 17:27:44 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:18.409 17:27:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:18.409 17:27:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:18.409 17:27:44 -- common/autotest_common.sh@10 -- # set +x 00:11:18.409 17:27:44 -- nvmf/common.sh@470 -- # nvmfpid=1980031 00:11:18.409 17:27:44 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:18.409 17:27:44 -- nvmf/common.sh@471 -- # waitforlisten 1980031 00:11:18.409 17:27:44 -- common/autotest_common.sh@817 -- # '[' -z 1980031 ']' 00:11:18.409 17:27:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.409 17:27:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:18.409 17:27:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.409 17:27:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:18.409 17:27:44 -- common/autotest_common.sh@10 -- # set +x 00:11:18.668 [2024-04-18 17:27:44.962302] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:11:18.668 [2024-04-18 17:27:44.962411] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.668 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.668 [2024-04-18 17:27:45.019880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.668 [2024-04-18 17:27:45.128675] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:18.668 [2024-04-18 17:27:45.128731] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.668 [2024-04-18 17:27:45.128760] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.668 [2024-04-18 17:27:45.128771] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.668 [2024-04-18 17:27:45.128780] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.668 [2024-04-18 17:27:45.128807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.927 17:27:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:18.927 17:27:45 -- common/autotest_common.sh@850 -- # return 0 00:11:18.927 17:27:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:18.927 17:27:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:18.927 17:27:45 -- common/autotest_common.sh@10 -- # set +x 00:11:18.927 17:27:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.927 17:27:45 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:18.927 17:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.927 17:27:45 -- common/autotest_common.sh@10 -- # set +x 00:11:18.927 [2024-04-18 17:27:45.293221] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1aa2cb0/0x1aa71a0) succeed. 00:11:18.927 [2024-04-18 17:27:45.305170] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aa41b0/0x1ae8830) succeed. 00:11:18.927 17:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.927 17:27:45 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:18.927 17:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.927 17:27:45 -- common/autotest_common.sh@10 -- # set +x 00:11:18.927 Malloc0 00:11:18.927 17:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.927 17:27:45 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:18.927 17:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.927 17:27:45 -- common/autotest_common.sh@10 -- # set +x 00:11:18.927 17:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.927 17:27:45 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:18.927 17:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.927 17:27:45 -- common/autotest_common.sh@10 -- # set +x 00:11:18.927 17:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.927 17:27:45 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:18.927 17:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.927 17:27:45 -- common/autotest_common.sh@10 -- # set +x 00:11:18.927 [2024-04-18 17:27:45.401742] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:18.927 17:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.927 17:27:45 -- target/queue_depth.sh@30 -- # bdevperf_pid=1980176 00:11:18.927 17:27:45 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 
00:11:18.927 17:27:45 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:18.927 17:27:45 -- target/queue_depth.sh@33 -- # waitforlisten 1980176 /var/tmp/bdevperf.sock 00:11:18.927 17:27:45 -- common/autotest_common.sh@817 -- # '[' -z 1980176 ']' 00:11:18.927 17:27:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:18.927 17:27:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:18.927 17:27:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:18.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:18.927 17:27:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:18.927 17:27:45 -- common/autotest_common.sh@10 -- # set +x 00:11:18.927 [2024-04-18 17:27:45.444779] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:11:18.927 [2024-04-18 17:27:45.444855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980176 ] 00:11:19.186 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.186 [2024-04-18 17:27:45.507331] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.186 [2024-04-18 17:27:45.620992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.444 17:27:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:19.444 17:27:45 -- common/autotest_common.sh@850 -- # return 0 00:11:19.444 17:27:45 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:19.444 17:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.444 17:27:45 -- common/autotest_common.sh@10 -- # set +x 00:11:19.444 NVMe0n1 00:11:19.444 17:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.444 17:27:45 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:19.444 Running I/O for 10 seconds... 
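[Editor's note] On the initiator side, bdevperf is started in wait-for-RPC mode (-z) on its own socket and then driven over that socket. A minimal sketch of the attach/run sequence follows, with every flag taken from the trace; backgrounding with & stands in for the harness' own process handling.

# Minimal sketch: queue-depth run from the bdevperf side (flags as traced above).
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

$SPDK/build/examples/bdevperf -z -r $SOCK -q 1024 -o 4096 -w verify -t 10 &   # qd 1024, 4 KiB verify I/O, 10 s
# (in practice, wait for $SOCK to appear before issuing RPCs)

# Attach the exported namespace (shows up as bdev NVMe0n1), then start the workload:
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests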
00:11:31.652 00:11:31.652 Latency(us) 00:11:31.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.652 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:31.652 Verification LBA range: start 0x0 length 0x4000 00:11:31.652 NVMe0n1 : 10.06 13338.03 52.10 0.00 0.00 76538.79 28544.57 49321.91 00:11:31.652 =================================================================================================================== 00:11:31.652 Total : 13338.03 52.10 0.00 0.00 76538.79 28544.57 49321.91 00:11:31.652 0 00:11:31.652 17:27:56 -- target/queue_depth.sh@39 -- # killprocess 1980176 00:11:31.652 17:27:56 -- common/autotest_common.sh@936 -- # '[' -z 1980176 ']' 00:11:31.652 17:27:56 -- common/autotest_common.sh@940 -- # kill -0 1980176 00:11:31.652 17:27:56 -- common/autotest_common.sh@941 -- # uname 00:11:31.652 17:27:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:31.652 17:27:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1980176 00:11:31.652 17:27:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:31.652 17:27:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:31.652 17:27:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1980176' 00:11:31.652 killing process with pid 1980176 00:11:31.652 17:27:56 -- common/autotest_common.sh@955 -- # kill 1980176 00:11:31.652 Received shutdown signal, test time was about 10.000000 seconds 00:11:31.652 00:11:31.652 Latency(us) 00:11:31.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.652 =================================================================================================================== 00:11:31.652 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:31.652 17:27:56 -- common/autotest_common.sh@960 -- # wait 1980176 00:11:31.652 17:27:56 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:31.652 17:27:56 -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:31.652 17:27:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:31.652 17:27:56 -- nvmf/common.sh@117 -- # sync 00:11:31.652 17:27:56 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:31.652 17:27:56 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:31.652 17:27:56 -- nvmf/common.sh@120 -- # set +e 00:11:31.652 17:27:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:31.652 17:27:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:31.652 rmmod nvme_rdma 00:11:31.652 rmmod nvme_fabrics 00:11:31.652 17:27:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:31.652 17:27:56 -- nvmf/common.sh@124 -- # set -e 00:11:31.652 17:27:56 -- nvmf/common.sh@125 -- # return 0 00:11:31.652 17:27:56 -- nvmf/common.sh@478 -- # '[' -n 1980031 ']' 00:11:31.652 17:27:56 -- nvmf/common.sh@479 -- # killprocess 1980031 00:11:31.652 17:27:56 -- common/autotest_common.sh@936 -- # '[' -z 1980031 ']' 00:11:31.652 17:27:56 -- common/autotest_common.sh@940 -- # kill -0 1980031 00:11:31.652 17:27:56 -- common/autotest_common.sh@941 -- # uname 00:11:31.652 17:27:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:31.652 17:27:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1980031 00:11:31.652 17:27:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:31.652 17:27:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:31.652 17:27:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1980031' 00:11:31.652 
killing process with pid 1980031 00:11:31.652 17:27:56 -- common/autotest_common.sh@955 -- # kill 1980031 00:11:31.652 17:27:56 -- common/autotest_common.sh@960 -- # wait 1980031 00:11:31.652 17:27:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:31.652 17:27:56 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:31.652 00:11:31.652 real 0m14.012s 00:11:31.652 user 0m23.481s 00:11:31.652 sys 0m2.091s 00:11:31.652 17:27:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:31.652 17:27:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.652 ************************************ 00:11:31.652 END TEST nvmf_queue_depth 00:11:31.652 ************************************ 00:11:31.652 17:27:56 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:31.652 17:27:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:31.652 17:27:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:31.652 17:27:56 -- common/autotest_common.sh@10 -- # set +x 00:11:31.652 ************************************ 00:11:31.652 START TEST nvmf_multipath 00:11:31.652 ************************************ 00:11:31.652 17:27:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:11:31.652 * Looking for test storage... 00:11:31.652 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:31.652 17:27:56 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.652 17:27:56 -- nvmf/common.sh@7 -- # uname -s 00:11:31.652 17:27:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.652 17:27:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.652 17:27:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.652 17:27:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.652 17:27:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.652 17:27:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.652 17:27:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.652 17:27:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.652 17:27:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.652 17:27:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.652 17:27:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.652 17:27:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.652 17:27:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.652 17:27:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.652 17:27:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.652 17:27:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.652 17:27:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:31.652 17:27:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.652 17:27:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.652 17:27:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.653 17:27:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.653 17:27:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.653 17:27:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.653 17:27:56 -- paths/export.sh@5 -- # export PATH 00:11:31.653 17:27:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.653 17:27:56 -- nvmf/common.sh@47 -- # : 0 00:11:31.653 17:27:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.653 17:27:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.653 17:27:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.653 17:27:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.653 17:27:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.653 17:27:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.653 17:27:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.653 17:27:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.653 17:27:56 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.653 17:27:56 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.653 17:27:56 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:31.653 17:27:56 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:31.653 17:27:56 -- target/multipath.sh@43 -- # nvmftestinit 00:11:31.653 17:27:56 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:31.653 17:27:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.653 17:27:56 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:11:31.653 17:27:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:31.653 17:27:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:31.653 17:27:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.653 17:27:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.653 17:27:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.653 17:27:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:31.653 17:27:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:31.653 17:27:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:31.653 17:27:56 -- common/autotest_common.sh@10 -- # set +x 00:11:32.588 17:27:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:32.588 17:27:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:32.588 17:27:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:32.588 17:27:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:32.588 17:27:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:32.588 17:27:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:32.588 17:27:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:32.588 17:27:59 -- nvmf/common.sh@295 -- # net_devs=() 00:11:32.588 17:27:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:32.588 17:27:59 -- nvmf/common.sh@296 -- # e810=() 00:11:32.588 17:27:59 -- nvmf/common.sh@296 -- # local -ga e810 00:11:32.588 17:27:59 -- nvmf/common.sh@297 -- # x722=() 00:11:32.588 17:27:59 -- nvmf/common.sh@297 -- # local -ga x722 00:11:32.588 17:27:59 -- nvmf/common.sh@298 -- # mlx=() 00:11:32.588 17:27:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:32.588 17:27:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.588 17:27:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.588 17:27:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.588 17:27:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.588 17:27:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.588 17:27:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.588 17:27:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.588 17:27:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.588 17:27:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.588 17:27:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.588 17:27:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.588 17:27:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:32.588 17:27:59 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:32.588 17:27:59 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:32.588 17:27:59 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:32.588 17:27:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:32.588 17:27:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.588 17:27:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:11:32.588 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:11:32.588 17:27:59 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:32.588 
17:27:59 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:32.588 17:27:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:32.588 17:27:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:11:32.588 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:11:32.588 17:27:59 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:32.588 17:27:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:32.588 17:27:59 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.588 17:27:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.588 17:27:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:32.588 17:27:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.588 17:27:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:11:32.588 Found net devices under 0000:09:00.0: mlx_0_0 00:11:32.588 17:27:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.588 17:27:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:32.588 17:27:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.588 17:27:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:32.588 17:27:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.588 17:27:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:11:32.588 Found net devices under 0000:09:00.1: mlx_0_1 00:11:32.588 17:27:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.588 17:27:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:32.588 17:27:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:32.588 17:27:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:32.588 17:27:59 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:32.588 17:27:59 -- nvmf/common.sh@58 -- # uname 00:11:32.588 17:27:59 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:32.588 17:27:59 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:32.588 17:27:59 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:32.588 17:27:59 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:32.588 17:27:59 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:32.588 17:27:59 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:32.588 17:27:59 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:32.588 17:27:59 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:32.588 17:27:59 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:32.588 17:27:59 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:32.588 17:27:59 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:32.588 17:27:59 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:32.588 17:27:59 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:32.588 17:27:59 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:32.588 17:27:59 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 
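[Editor's note] rdma_device_init in nvmftestinit is mostly kernel module loading plus interface discovery. Below is a minimal sketch of the modprobe sequence traced above; the comment on rxe_cfg describes only how get_rdma_if_list uses its output in this run, not the script's full behaviour.

# Minimal sketch: load the IB/RDMA stack as load_ib_rdma_modules does in the trace.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done
# rxe_cfg_small.sh rxe-net lists candidate net devices; get_rdma_if_list then keeps
# the ones matching the mlx_0_* netdevs found under the ConnectX PCI functions.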
00:11:32.588 17:27:59 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:32.588 17:27:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:32.588 17:27:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.588 17:27:59 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:32.588 17:27:59 -- nvmf/common.sh@105 -- # continue 2 00:11:32.588 17:27:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:32.588 17:27:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.588 17:27:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.588 17:27:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:32.588 17:27:59 -- nvmf/common.sh@105 -- # continue 2 00:11:32.588 17:27:59 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:32.588 17:27:59 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:32.588 17:27:59 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:32.588 17:27:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:32.588 17:27:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:32.588 17:27:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:32.588 17:27:59 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:32.588 17:27:59 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:32.588 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:32.588 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:11:32.588 altname enp9s0f0np0 00:11:32.588 inet 192.168.100.8/24 scope global mlx_0_0 00:11:32.588 valid_lft forever preferred_lft forever 00:11:32.588 17:27:59 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:32.588 17:27:59 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:32.588 17:27:59 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:32.588 17:27:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:32.588 17:27:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:32.588 17:27:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:32.588 17:27:59 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:32.588 17:27:59 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:32.588 17:27:59 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:32.588 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:32.588 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:11:32.588 altname enp9s0f1np1 00:11:32.588 inet 192.168.100.9/24 scope global mlx_0_1 00:11:32.589 valid_lft forever preferred_lft forever 00:11:32.589 17:27:59 -- nvmf/common.sh@411 -- # return 0 00:11:32.589 17:27:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:32.589 17:27:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:32.589 17:27:59 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:32.589 17:27:59 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:32.589 17:27:59 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:32.589 17:27:59 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:32.589 17:27:59 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:32.589 17:27:59 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:32.589 17:27:59 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:32.589 17:27:59 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:32.589 17:27:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:32.589 17:27:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.589 17:27:59 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:32.589 17:27:59 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:32.589 17:27:59 -- nvmf/common.sh@105 -- # continue 2 00:11:32.589 17:27:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:32.589 17:27:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.589 17:27:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:32.589 17:27:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:32.589 17:27:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:32.589 17:27:59 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:32.589 17:27:59 -- nvmf/common.sh@105 -- # continue 2 00:11:32.589 17:27:59 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:32.589 17:27:59 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:32.589 17:27:59 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:32.589 17:27:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:32.589 17:27:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:32.589 17:27:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:32.589 17:27:59 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:32.589 17:27:59 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:32.589 17:27:59 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:32.589 17:27:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:32.589 17:27:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:32.589 17:27:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:32.589 17:27:59 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:32.589 192.168.100.9' 00:11:32.589 17:27:59 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:32.589 192.168.100.9' 00:11:32.589 17:27:59 -- nvmf/common.sh@446 -- # head -n 1 00:11:32.589 17:27:59 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:32.589 17:27:59 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:32.589 192.168.100.9' 00:11:32.589 17:27:59 -- nvmf/common.sh@447 -- # tail -n +2 00:11:32.589 17:27:59 -- nvmf/common.sh@447 -- # head -n 1 00:11:32.589 17:27:59 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:32.589 17:27:59 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:32.589 17:27:59 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:32.589 17:27:59 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:32.589 17:27:59 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:32.589 17:27:59 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:32.848 17:27:59 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:11:32.848 17:27:59 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:11:32.848 17:27:59 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:11:32.848 run this test only with TCP transport for now 00:11:32.848 17:27:59 -- target/multipath.sh@53 -- # nvmftestfini 00:11:32.848 17:27:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:32.848 17:27:59 -- nvmf/common.sh@117 -- # sync 00:11:32.848 17:27:59 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:32.848 17:27:59 -- nvmf/common.sh@119 -- # 
'[' rdma == rdma ']' 00:11:32.848 17:27:59 -- nvmf/common.sh@120 -- # set +e 00:11:32.848 17:27:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:32.848 17:27:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:32.848 rmmod nvme_rdma 00:11:32.848 rmmod nvme_fabrics 00:11:32.848 17:27:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:32.848 17:27:59 -- nvmf/common.sh@124 -- # set -e 00:11:32.848 17:27:59 -- nvmf/common.sh@125 -- # return 0 00:11:32.848 17:27:59 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:11:32.848 17:27:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:32.848 17:27:59 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:32.848 17:27:59 -- target/multipath.sh@54 -- # exit 0 00:11:32.848 17:27:59 -- target/multipath.sh@1 -- # nvmftestfini 00:11:32.848 17:27:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:32.848 17:27:59 -- nvmf/common.sh@117 -- # sync 00:11:32.848 17:27:59 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:32.848 17:27:59 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:32.848 17:27:59 -- nvmf/common.sh@120 -- # set +e 00:11:32.848 17:27:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:32.848 17:27:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:32.848 17:27:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:32.848 17:27:59 -- nvmf/common.sh@124 -- # set -e 00:11:32.848 17:27:59 -- nvmf/common.sh@125 -- # return 0 00:11:32.848 17:27:59 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:11:32.848 17:27:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:32.848 17:27:59 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:32.848 00:11:32.848 real 0m2.275s 00:11:32.848 user 0m0.829s 00:11:32.848 sys 0m1.526s 00:11:32.848 17:27:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:32.848 17:27:59 -- common/autotest_common.sh@10 -- # set +x 00:11:32.848 ************************************ 00:11:32.848 END TEST nvmf_multipath 00:11:32.848 ************************************ 00:11:32.848 17:27:59 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:32.848 17:27:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:32.848 17:27:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:32.848 17:27:59 -- common/autotest_common.sh@10 -- # set +x 00:11:32.848 ************************************ 00:11:32.848 START TEST nvmf_zcopy 00:11:32.848 ************************************ 00:11:32.848 17:27:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:11:32.848 * Looking for test storage... 
00:11:32.848 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:32.848 17:27:59 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.848 17:27:59 -- nvmf/common.sh@7 -- # uname -s 00:11:32.848 17:27:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.848 17:27:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.848 17:27:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.848 17:27:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.848 17:27:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.848 17:27:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.848 17:27:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.848 17:27:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.848 17:27:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.848 17:27:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.848 17:27:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.848 17:27:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:32.848 17:27:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.848 17:27:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.848 17:27:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.848 17:27:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.848 17:27:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:32.848 17:27:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.848 17:27:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.848 17:27:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.848 17:27:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.848 17:27:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.848 17:27:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.848 17:27:59 -- paths/export.sh@5 -- # export PATH 00:11:32.848 17:27:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.848 17:27:59 -- nvmf/common.sh@47 -- # : 0 00:11:32.848 17:27:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.848 17:27:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.848 17:27:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.848 17:27:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.849 17:27:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.849 17:27:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.849 17:27:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.849 17:27:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.849 17:27:59 -- target/zcopy.sh@12 -- # nvmftestinit 00:11:32.849 17:27:59 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:32.849 17:27:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.849 17:27:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:32.849 17:27:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:32.849 17:27:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:32.849 17:27:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.849 17:27:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:32.849 17:27:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.849 17:27:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:32.849 17:27:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:32.849 17:27:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:32.849 17:27:59 -- common/autotest_common.sh@10 -- # set +x 00:11:35.381 17:28:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:35.381 17:28:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.381 17:28:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.381 17:28:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.381 17:28:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.381 17:28:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.381 17:28:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.381 17:28:01 -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.381 17:28:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.381 17:28:01 -- nvmf/common.sh@296 -- # e810=() 00:11:35.381 17:28:01 -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.381 17:28:01 -- nvmf/common.sh@297 -- # x722=() 
00:11:35.381 17:28:01 -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.381 17:28:01 -- nvmf/common.sh@298 -- # mlx=() 00:11:35.381 17:28:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.381 17:28:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.381 17:28:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.381 17:28:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.381 17:28:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.381 17:28:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.381 17:28:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.381 17:28:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.381 17:28:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.381 17:28:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.381 17:28:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.381 17:28:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.381 17:28:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.381 17:28:01 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:35.381 17:28:01 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:35.381 17:28:01 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:35.381 17:28:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.381 17:28:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.381 17:28:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:11:35.381 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:11:35.381 17:28:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:35.381 17:28:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.381 17:28:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:11:35.381 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:11:35.381 17:28:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:35.381 17:28:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.381 17:28:01 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.381 17:28:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.381 17:28:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:35.381 17:28:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.381 17:28:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:11:35.381 Found net devices under 0000:09:00.0: mlx_0_0 00:11:35.381 17:28:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.381 17:28:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.381 17:28:01 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.381 17:28:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:35.381 17:28:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.381 17:28:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:11:35.381 Found net devices under 0000:09:00.1: mlx_0_1 00:11:35.381 17:28:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.381 17:28:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:35.381 17:28:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:35.381 17:28:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:35.381 17:28:01 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:35.381 17:28:01 -- nvmf/common.sh@58 -- # uname 00:11:35.381 17:28:01 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:35.381 17:28:01 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:35.381 17:28:01 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:35.381 17:28:01 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:35.381 17:28:01 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:35.381 17:28:01 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:35.381 17:28:01 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:35.381 17:28:01 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:35.381 17:28:01 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:35.381 17:28:01 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:35.381 17:28:01 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:35.381 17:28:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:35.381 17:28:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:35.381 17:28:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:35.381 17:28:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:35.381 17:28:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:35.381 17:28:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:35.381 17:28:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.381 17:28:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:35.381 17:28:01 -- nvmf/common.sh@105 -- # continue 2 00:11:35.381 17:28:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:35.381 17:28:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.381 17:28:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.381 17:28:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:35.381 17:28:01 -- nvmf/common.sh@105 -- # continue 2 00:11:35.381 17:28:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:35.381 17:28:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:35.381 17:28:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:35.381 17:28:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:35.381 17:28:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:35.381 17:28:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:35.381 17:28:01 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:35.381 
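[Editor's note] get_ip_address, traced repeatedly in these tests, is a plain ip/awk/cut pipeline, and its two results feed NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP via head/tail. A minimal sketch reproducing the logic seen in the trace:

# Minimal sketch of get_ip_address and the RDMA_IP_LIST handling from nvmf/common.sh as traced.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"                                   # "192.168.100.8" + newline + "192.168.100.9"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)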
17:28:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:35.381 17:28:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:35.381 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:35.381 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:11:35.381 altname enp9s0f0np0 00:11:35.381 inet 192.168.100.8/24 scope global mlx_0_0 00:11:35.381 valid_lft forever preferred_lft forever 00:11:35.381 17:28:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:35.381 17:28:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:35.381 17:28:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:35.381 17:28:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:35.381 17:28:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:35.382 17:28:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:35.382 17:28:01 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:35.382 17:28:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:35.382 17:28:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:35.382 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:35.382 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:11:35.382 altname enp9s0f1np1 00:11:35.382 inet 192.168.100.9/24 scope global mlx_0_1 00:11:35.382 valid_lft forever preferred_lft forever 00:11:35.382 17:28:01 -- nvmf/common.sh@411 -- # return 0 00:11:35.382 17:28:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:35.382 17:28:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:35.382 17:28:01 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:35.382 17:28:01 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:35.382 17:28:01 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:35.382 17:28:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:35.382 17:28:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:35.382 17:28:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:35.382 17:28:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:35.382 17:28:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:35.382 17:28:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:35.382 17:28:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.382 17:28:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:35.382 17:28:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:35.382 17:28:01 -- nvmf/common.sh@105 -- # continue 2 00:11:35.382 17:28:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:35.382 17:28:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.382 17:28:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:35.382 17:28:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:35.382 17:28:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:35.382 17:28:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:35.382 17:28:01 -- nvmf/common.sh@105 -- # continue 2 00:11:35.382 17:28:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:35.382 17:28:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:35.382 17:28:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:35.382 17:28:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:35.382 17:28:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:35.382 17:28:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:35.382 17:28:01 -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:11:35.382 17:28:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:35.382 17:28:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:35.382 17:28:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:35.382 17:28:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:35.382 17:28:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:35.382 17:28:01 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:35.382 192.168.100.9' 00:11:35.382 17:28:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:35.382 192.168.100.9' 00:11:35.382 17:28:01 -- nvmf/common.sh@446 -- # head -n 1 00:11:35.382 17:28:01 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:35.382 17:28:01 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:35.382 192.168.100.9' 00:11:35.382 17:28:01 -- nvmf/common.sh@447 -- # tail -n +2 00:11:35.382 17:28:01 -- nvmf/common.sh@447 -- # head -n 1 00:11:35.382 17:28:01 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:35.382 17:28:01 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:35.382 17:28:01 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:35.382 17:28:01 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:35.382 17:28:01 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:35.382 17:28:01 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:35.382 17:28:01 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:35.382 17:28:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:35.382 17:28:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:35.382 17:28:01 -- common/autotest_common.sh@10 -- # set +x 00:11:35.382 17:28:01 -- nvmf/common.sh@470 -- # nvmfpid=1984839 00:11:35.382 17:28:01 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:35.382 17:28:01 -- nvmf/common.sh@471 -- # waitforlisten 1984839 00:11:35.382 17:28:01 -- common/autotest_common.sh@817 -- # '[' -z 1984839 ']' 00:11:35.382 17:28:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.382 17:28:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:35.382 17:28:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.382 17:28:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:35.382 17:28:01 -- common/autotest_common.sh@10 -- # set +x 00:11:35.382 [2024-04-18 17:28:01.607317] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:11:35.382 [2024-04-18 17:28:01.607397] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.382 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.382 [2024-04-18 17:28:01.667774] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.382 [2024-04-18 17:28:01.782471] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.382 [2024-04-18 17:28:01.782530] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:35.382 [2024-04-18 17:28:01.782559] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.382 [2024-04-18 17:28:01.782571] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.382 [2024-04-18 17:28:01.782581] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.382 [2024-04-18 17:28:01.782616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.382 17:28:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:35.382 17:28:01 -- common/autotest_common.sh@850 -- # return 0 00:11:35.382 17:28:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:35.382 17:28:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:35.382 17:28:01 -- common/autotest_common.sh@10 -- # set +x 00:11:35.642 17:28:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.642 17:28:01 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:11:35.642 17:28:01 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:11:35.642 Unsupported transport: rdma 00:11:35.642 17:28:01 -- target/zcopy.sh@17 -- # exit 0 00:11:35.642 17:28:01 -- target/zcopy.sh@1 -- # process_shm --id 0 00:11:35.642 17:28:01 -- common/autotest_common.sh@794 -- # type=--id 00:11:35.642 17:28:01 -- common/autotest_common.sh@795 -- # id=0 00:11:35.642 17:28:01 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:11:35.642 17:28:01 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:35.642 17:28:01 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:11:35.642 17:28:01 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:11:35.642 17:28:01 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:11:35.642 17:28:01 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:35.642 nvmf_trace.0 00:11:35.642 17:28:01 -- common/autotest_common.sh@809 -- # return 0 00:11:35.642 17:28:01 -- target/zcopy.sh@1 -- # nvmftestfini 00:11:35.642 17:28:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:35.642 17:28:01 -- nvmf/common.sh@117 -- # sync 00:11:35.642 17:28:01 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:35.642 17:28:01 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:35.642 17:28:01 -- nvmf/common.sh@120 -- # set +e 00:11:35.642 17:28:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:35.642 17:28:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:35.642 rmmod nvme_rdma 00:11:35.642 rmmod nvme_fabrics 00:11:35.642 17:28:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:35.642 17:28:01 -- nvmf/common.sh@124 -- # set -e 00:11:35.642 17:28:01 -- nvmf/common.sh@125 -- # return 0 00:11:35.642 17:28:01 -- nvmf/common.sh@478 -- # '[' -n 1984839 ']' 00:11:35.642 17:28:01 -- nvmf/common.sh@479 -- # killprocess 1984839 00:11:35.642 17:28:01 -- common/autotest_common.sh@936 -- # '[' -z 1984839 ']' 00:11:35.642 17:28:01 -- common/autotest_common.sh@940 -- # kill -0 1984839 00:11:35.642 17:28:01 -- common/autotest_common.sh@941 -- # uname 00:11:35.642 17:28:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:35.642 17:28:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1984839 00:11:35.642 17:28:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:35.642 17:28:02 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:35.642 17:28:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1984839' 00:11:35.642 killing process with pid 1984839 00:11:35.642 17:28:02 -- common/autotest_common.sh@955 -- # kill 1984839 00:11:35.642 17:28:02 -- common/autotest_common.sh@960 -- # wait 1984839 00:11:35.901 17:28:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:35.901 17:28:02 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:35.902 00:11:35.902 real 0m2.998s 00:11:35.902 user 0m1.682s 00:11:35.902 sys 0m1.788s 00:11:35.902 17:28:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:35.902 17:28:02 -- common/autotest_common.sh@10 -- # set +x 00:11:35.902 ************************************ 00:11:35.902 END TEST nvmf_zcopy 00:11:35.902 ************************************ 00:11:35.902 17:28:02 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:35.902 17:28:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:35.902 17:28:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:35.902 17:28:02 -- common/autotest_common.sh@10 -- # set +x 00:11:35.902 ************************************ 00:11:35.902 START TEST nvmf_nmic 00:11:35.902 ************************************ 00:11:35.902 17:28:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:11:36.160 * Looking for test storage... 00:11:36.160 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:36.161 17:28:02 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.161 17:28:02 -- nvmf/common.sh@7 -- # uname -s 00:11:36.161 17:28:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.161 17:28:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.161 17:28:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.161 17:28:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.161 17:28:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.161 17:28:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.161 17:28:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.161 17:28:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.161 17:28:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.161 17:28:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.161 17:28:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:36.161 17:28:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:36.161 17:28:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.161 17:28:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:36.161 17:28:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.161 17:28:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.161 17:28:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:36.161 17:28:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.161 17:28:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.161 17:28:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.161 17:28:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.161 17:28:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.161 17:28:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.161 17:28:02 -- paths/export.sh@5 -- # export PATH 00:11:36.161 17:28:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.161 17:28:02 -- nvmf/common.sh@47 -- # : 0 00:11:36.161 17:28:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:36.161 17:28:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:36.161 17:28:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.161 17:28:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.161 17:28:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.161 17:28:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:36.161 17:28:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:36.161 17:28:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:36.161 17:28:02 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:36.161 17:28:02 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:36.161 17:28:02 -- target/nmic.sh@14 -- # nvmftestinit 00:11:36.161 17:28:02 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:36.161 17:28:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.161 17:28:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:36.161 17:28:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:36.161 17:28:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:36.161 17:28:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:36.161 17:28:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.161 17:28:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.161 17:28:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:36.161 17:28:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:36.161 17:28:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:36.161 17:28:02 -- common/autotest_common.sh@10 -- # set +x 00:11:38.062 17:28:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:38.062 17:28:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:38.062 17:28:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:38.062 17:28:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:38.062 17:28:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:38.062 17:28:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:38.062 17:28:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:38.062 17:28:04 -- nvmf/common.sh@295 -- # net_devs=() 00:11:38.062 17:28:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:38.062 17:28:04 -- nvmf/common.sh@296 -- # e810=() 00:11:38.063 17:28:04 -- nvmf/common.sh@296 -- # local -ga e810 00:11:38.063 17:28:04 -- nvmf/common.sh@297 -- # x722=() 00:11:38.063 17:28:04 -- nvmf/common.sh@297 -- # local -ga x722 00:11:38.063 17:28:04 -- nvmf/common.sh@298 -- # mlx=() 00:11:38.063 17:28:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:38.063 17:28:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.063 17:28:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.063 17:28:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.063 17:28:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.063 17:28:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.063 17:28:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.063 17:28:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.063 17:28:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.063 17:28:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.063 17:28:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.063 17:28:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.063 17:28:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:38.063 17:28:04 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:38.063 17:28:04 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:38.063 17:28:04 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:38.063 17:28:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:38.063 17:28:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:11:38.063 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:11:38.063 17:28:04 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:38.063 17:28:04 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:11:38.063 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:11:38.063 17:28:04 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:38.063 17:28:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:38.063 17:28:04 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.063 17:28:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:38.063 17:28:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.063 17:28:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:11:38.063 Found net devices under 0000:09:00.0: mlx_0_0 00:11:38.063 17:28:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.063 17:28:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.063 17:28:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:38.063 17:28:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.063 17:28:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:11:38.063 Found net devices under 0000:09:00.1: mlx_0_1 00:11:38.063 17:28:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.063 17:28:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:38.063 17:28:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:38.063 17:28:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:38.063 17:28:04 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:38.063 17:28:04 -- nvmf/common.sh@58 -- # uname 00:11:38.063 17:28:04 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:38.063 17:28:04 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:38.063 17:28:04 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:38.063 17:28:04 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:38.063 17:28:04 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:38.063 17:28:04 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:38.063 17:28:04 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:38.063 17:28:04 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:38.063 17:28:04 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:38.063 17:28:04 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:38.063 17:28:04 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:38.063 17:28:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:38.063 17:28:04 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:38.063 17:28:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:38.063 17:28:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:38.063 17:28:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:38.063 17:28:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:38.063 17:28:04 -- nvmf/common.sh@105 -- # continue 2 00:11:38.063 17:28:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:38.063 17:28:04 -- nvmf/common.sh@105 -- # continue 2 00:11:38.063 17:28:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:38.063 17:28:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:38.063 17:28:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:38.063 17:28:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:38.063 17:28:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:38.063 17:28:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:38.063 17:28:04 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:38.063 17:28:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:38.063 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:38.063 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:11:38.063 altname enp9s0f0np0 00:11:38.063 inet 192.168.100.8/24 scope global mlx_0_0 00:11:38.063 valid_lft forever preferred_lft forever 00:11:38.063 17:28:04 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:38.063 17:28:04 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:38.063 17:28:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:38.063 17:28:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:38.063 17:28:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:38.063 17:28:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:38.063 17:28:04 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:38.063 17:28:04 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:38.063 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:38.063 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:11:38.063 altname enp9s0f1np1 00:11:38.063 inet 192.168.100.9/24 scope global mlx_0_1 00:11:38.063 valid_lft forever preferred_lft forever 00:11:38.063 17:28:04 -- nvmf/common.sh@411 -- # return 0 00:11:38.063 17:28:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:38.063 17:28:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:38.063 17:28:04 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:38.063 17:28:04 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:38.063 17:28:04 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:38.063 17:28:04 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:38.063 17:28:04 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:38.063 17:28:04 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:38.063 17:28:04 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:38.063 17:28:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:38.063 17:28:04 -- nvmf/common.sh@105 -- # continue 2 00:11:38.063 17:28:04 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:38.063 17:28:04 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:38.063 17:28:04 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:38.063 17:28:04 -- nvmf/common.sh@105 -- # continue 2 00:11:38.063 17:28:04 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:38.063 17:28:04 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:38.063 17:28:04 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:38.063 17:28:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:38.063 17:28:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:38.063 17:28:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:38.063 17:28:04 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:38.064 17:28:04 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:38.064 17:28:04 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:38.064 17:28:04 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:38.064 17:28:04 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:38.064 17:28:04 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:38.064 17:28:04 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:38.064 192.168.100.9' 00:11:38.064 17:28:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:38.064 192.168.100.9' 00:11:38.064 17:28:04 -- nvmf/common.sh@446 -- # head -n 1 00:11:38.064 17:28:04 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:38.064 17:28:04 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:38.064 192.168.100.9' 00:11:38.064 17:28:04 -- nvmf/common.sh@447 -- # tail -n +2 00:11:38.064 17:28:04 -- nvmf/common.sh@447 -- # head -n 1 00:11:38.064 17:28:04 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:38.064 17:28:04 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:38.064 17:28:04 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:38.064 17:28:04 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:38.064 17:28:04 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:38.064 17:28:04 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:38.064 17:28:04 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:38.064 17:28:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:38.064 17:28:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:38.064 17:28:04 -- common/autotest_common.sh@10 -- # set +x 00:11:38.064 17:28:04 -- nvmf/common.sh@470 -- # nvmfpid=1986640 00:11:38.064 17:28:04 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.064 17:28:04 -- nvmf/common.sh@471 -- # waitforlisten 1986640 00:11:38.064 17:28:04 -- common/autotest_common.sh@817 -- # '[' -z 1986640 ']' 00:11:38.064 17:28:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.323 17:28:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:38.323 17:28:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.323 17:28:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:38.323 17:28:04 -- common/autotest_common.sh@10 -- # set +x 00:11:38.323 [2024-04-18 17:28:04.642754] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:11:38.323 [2024-04-18 17:28:04.642839] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.323 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.323 [2024-04-18 17:28:04.700606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.323 [2024-04-18 17:28:04.816709] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.323 [2024-04-18 17:28:04.816760] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.323 [2024-04-18 17:28:04.816790] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.323 [2024-04-18 17:28:04.816801] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.323 [2024-04-18 17:28:04.816811] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.323 [2024-04-18 17:28:04.816865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.323 [2024-04-18 17:28:04.816892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.323 [2024-04-18 17:28:04.816950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.323 [2024-04-18 17:28:04.816953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.583 17:28:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:38.583 17:28:04 -- common/autotest_common.sh@850 -- # return 0 00:11:38.583 17:28:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:38.583 17:28:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:38.583 17:28:04 -- common/autotest_common.sh@10 -- # set +x 00:11:38.583 17:28:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.583 17:28:04 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:38.583 17:28:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.583 17:28:04 -- common/autotest_common.sh@10 -- # set +x 00:11:38.583 [2024-04-18 17:28:05.000235] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x53bb60/0x540050) succeed. 00:11:38.583 [2024-04-18 17:28:05.011082] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x53d150/0x5816e0) succeed. 
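For reference, the target bring-up that just completed can be reproduced by hand outside the autotest harness: start the NVMe-oF target app and create the RDMA transport over the default RPC socket (/var/tmp/spdk.sock). A minimal sketch, assuming SPDK has been built under ./spdk and the mlx5/IB kernel modules are already loaded as earlier in this log; the readiness poll is only a stand-in for the harness's waitforlisten helper:

  # start the target exactly as the log does: shm id 0, all tracepoint groups, cores 0-3
  ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # crude readiness check against the default RPC socket (stand-in for waitforlisten)
  until ./spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done
  # same transport options the test passes above
  ./spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192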
00:11:38.844 17:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.844 17:28:05 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:38.844 17:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.844 17:28:05 -- common/autotest_common.sh@10 -- # set +x 00:11:38.844 Malloc0 00:11:38.844 17:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.844 17:28:05 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:38.844 17:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.844 17:28:05 -- common/autotest_common.sh@10 -- # set +x 00:11:38.844 17:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.844 17:28:05 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:38.844 17:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.844 17:28:05 -- common/autotest_common.sh@10 -- # set +x 00:11:38.844 17:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.844 17:28:05 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:38.844 17:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.844 17:28:05 -- common/autotest_common.sh@10 -- # set +x 00:11:38.844 [2024-04-18 17:28:05.207584] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:38.844 17:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.844 17:28:05 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:38.844 test case1: single bdev can't be used in multiple subsystems 00:11:38.844 17:28:05 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:38.844 17:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.844 17:28:05 -- common/autotest_common.sh@10 -- # set +x 00:11:38.844 17:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.844 17:28:05 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:38.844 17:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.844 17:28:05 -- common/autotest_common.sh@10 -- # set +x 00:11:38.844 17:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.844 17:28:05 -- target/nmic.sh@28 -- # nmic_status=0 00:11:38.844 17:28:05 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:38.844 17:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.844 17:28:05 -- common/autotest_common.sh@10 -- # set +x 00:11:38.844 [2024-04-18 17:28:05.231345] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:38.844 [2024-04-18 17:28:05.231397] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:38.844 [2024-04-18 17:28:05.231415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.844 request: 00:11:38.844 { 00:11:38.844 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:38.844 "namespace": { 00:11:38.844 "bdev_name": "Malloc0", 00:11:38.844 "no_auto_visible": false 00:11:38.844 }, 00:11:38.844 "method": "nvmf_subsystem_add_ns", 00:11:38.844 "req_id": 1 00:11:38.844 } 00:11:38.844 Got JSON-RPC error response 
00:11:38.844 response: 00:11:38.844 { 00:11:38.844 "code": -32602, 00:11:38.844 "message": "Invalid parameters" 00:11:38.844 } 00:11:38.844 17:28:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:11:38.844 17:28:05 -- target/nmic.sh@29 -- # nmic_status=1 00:11:38.844 17:28:05 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:38.844 17:28:05 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:38.844 Adding namespace failed - expected result. 00:11:38.844 17:28:05 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:38.844 test case2: host connect to nvmf target in multiple paths 00:11:38.844 17:28:05 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:11:38.844 17:28:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.844 17:28:05 -- common/autotest_common.sh@10 -- # set +x 00:11:38.845 [2024-04-18 17:28:05.239409] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:11:38.845 17:28:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.845 17:28:05 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:42.129 17:28:08 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:11:46.322 17:28:11 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.322 17:28:11 -- common/autotest_common.sh@1184 -- # local i=0 00:11:46.322 17:28:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.322 17:28:11 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:46.322 17:28:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:47.705 17:28:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:47.705 17:28:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:47.705 17:28:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.705 17:28:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:47.705 17:28:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.705 17:28:13 -- common/autotest_common.sh@1194 -- # return 0 00:11:47.705 17:28:13 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:47.705 [global] 00:11:47.705 thread=1 00:11:47.705 invalidate=1 00:11:47.705 rw=write 00:11:47.705 time_based=1 00:11:47.705 runtime=1 00:11:47.705 ioengine=libaio 00:11:47.705 direct=1 00:11:47.705 bs=4096 00:11:47.705 iodepth=1 00:11:47.705 norandommap=0 00:11:47.705 numjobs=1 00:11:47.705 00:11:47.705 verify_dump=1 00:11:47.705 verify_backlog=512 00:11:47.705 verify_state_save=0 00:11:47.705 do_verify=1 00:11:47.705 verify=crc32c-intel 00:11:47.705 [job0] 00:11:47.705 filename=/dev/nvme0n1 00:11:47.705 Could not set queue depth (nvme0n1) 00:11:47.705 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:47.705 fio-3.35 00:11:47.705 Starting 1 thread 00:11:49.112 00:11:49.112 job0: (groupid=0, jobs=1): err= 0: pid=1987855: Thu Apr 18 17:28:15 2024 
00:11:49.112 read: IOPS=6071, BW=23.7MiB/s (24.9MB/s)(23.7MiB/1000msec) 00:11:49.112 slat (nsec): min=5827, max=38781, avg=10474.22, stdev=3652.07 00:11:49.112 clat (usec): min=56, max=149, avg=70.97, stdev= 9.02 00:11:49.112 lat (usec): min=63, max=157, avg=81.44, stdev=10.59 00:11:49.112 clat percentiles (usec): 00:11:49.112 | 1.00th=[ 60], 5.00th=[ 62], 10.00th=[ 63], 20.00th=[ 65], 00:11:49.112 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 70], 60.00th=[ 71], 00:11:49.112 | 70.00th=[ 73], 80.00th=[ 76], 90.00th=[ 85], 95.00th=[ 91], 00:11:49.112 | 99.00th=[ 102], 99.50th=[ 106], 99.90th=[ 114], 99.95th=[ 116], 00:11:49.112 | 99.99th=[ 151] 00:11:49.112 write: IOPS=6144, BW=24.0MiB/s (25.2MB/s)(24.0MiB/1000msec); 0 zone resets 00:11:49.112 slat (nsec): min=6527, max=41250, avg=10978.77, stdev=3999.60 00:11:49.112 clat (usec): min=50, max=171, avg=65.43, stdev= 9.05 00:11:49.112 lat (usec): min=57, max=185, avg=76.41, stdev=11.10 00:11:49.112 clat percentiles (usec): 00:11:49.112 | 1.00th=[ 55], 5.00th=[ 56], 10.00th=[ 58], 20.00th=[ 59], 00:11:49.112 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 65], 00:11:49.112 | 70.00th=[ 68], 80.00th=[ 71], 90.00th=[ 78], 95.00th=[ 86], 00:11:49.112 | 99.00th=[ 97], 99.50th=[ 101], 99.90th=[ 109], 99.95th=[ 112], 00:11:49.112 | 99.99th=[ 172] 00:11:49.112 bw ( KiB/s): min=24576, max=24576, per=100.00%, avg=24576.00, stdev= 0.00, samples=1 00:11:49.112 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:11:49.112 lat (usec) : 100=99.03%, 250=0.97% 00:11:49.112 cpu : usr=8.50%, sys=11.70%, ctx=12215, majf=0, minf=2 00:11:49.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:49.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.112 issued rwts: total=6071,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:49.112 00:11:49.112 Run status group 0 (all jobs): 00:11:49.112 READ: bw=23.7MiB/s (24.9MB/s), 23.7MiB/s-23.7MiB/s (24.9MB/s-24.9MB/s), io=23.7MiB (24.9MB), run=1000-1000msec 00:11:49.112 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=24.0MiB (25.2MB), run=1000-1000msec 00:11:49.112 00:11:49.112 Disk stats (read/write): 00:11:49.112 nvme0n1: ios=5461/5632, merge=0/0, ticks=383/376, in_queue=759, util=90.88% 00:11:49.112 17:28:15 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:53.305 17:28:19 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.305 17:28:19 -- common/autotest_common.sh@1205 -- # local i=0 00:11:53.305 17:28:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:53.305 17:28:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.305 17:28:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:53.305 17:28:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.305 17:28:19 -- common/autotest_common.sh@1217 -- # return 0 00:11:53.305 17:28:19 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:53.305 17:28:19 -- target/nmic.sh@53 -- # nvmftestfini 00:11:53.305 17:28:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:53.305 17:28:19 -- nvmf/common.sh@117 -- # sync 00:11:53.305 17:28:19 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:53.305 17:28:19 -- 
nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:53.305 17:28:19 -- nvmf/common.sh@120 -- # set +e 00:11:53.305 17:28:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:53.305 17:28:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:53.305 rmmod nvme_rdma 00:11:53.305 rmmod nvme_fabrics 00:11:53.305 17:28:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:53.305 17:28:19 -- nvmf/common.sh@124 -- # set -e 00:11:53.305 17:28:19 -- nvmf/common.sh@125 -- # return 0 00:11:53.305 17:28:19 -- nvmf/common.sh@478 -- # '[' -n 1986640 ']' 00:11:53.305 17:28:19 -- nvmf/common.sh@479 -- # killprocess 1986640 00:11:53.305 17:28:19 -- common/autotest_common.sh@936 -- # '[' -z 1986640 ']' 00:11:53.305 17:28:19 -- common/autotest_common.sh@940 -- # kill -0 1986640 00:11:53.305 17:28:19 -- common/autotest_common.sh@941 -- # uname 00:11:53.305 17:28:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:53.305 17:28:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1986640 00:11:53.563 17:28:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:53.563 17:28:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:53.563 17:28:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1986640' 00:11:53.563 killing process with pid 1986640 00:11:53.563 17:28:19 -- common/autotest_common.sh@955 -- # kill 1986640 00:11:53.563 17:28:19 -- common/autotest_common.sh@960 -- # wait 1986640 00:11:53.821 17:28:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:53.821 17:28:20 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:53.821 00:11:53.821 real 0m17.808s 00:11:53.821 user 1m2.209s 00:11:53.821 sys 0m2.274s 00:11:53.821 17:28:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:53.822 17:28:20 -- common/autotest_common.sh@10 -- # set +x 00:11:53.822 ************************************ 00:11:53.822 END TEST nvmf_nmic 00:11:53.822 ************************************ 00:11:53.822 17:28:20 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:53.822 17:28:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:53.822 17:28:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:53.822 17:28:20 -- common/autotest_common.sh@10 -- # set +x 00:11:53.822 ************************************ 00:11:53.822 START TEST nvmf_fio_target 00:11:53.822 ************************************ 00:11:53.822 17:28:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:54.080 * Looking for test storage... 
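As the run_test line above shows, each stage here is a self-contained shell script that takes the transport as its argument, so the nvmf_fio_target stage starting now can also be launched directly. A rough sketch, assuming a built SPDK tree at the path printed in the log and root privileges; the harness additionally sources its autorun configuration, which is not reproduced here:

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  # the same invocation that run_test wraps; the script exits non-zero on failure
  sudo ./test/nvmf/target/fio.sh --transport=rdma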
00:11:54.080 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:54.080 17:28:20 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.080 17:28:20 -- nvmf/common.sh@7 -- # uname -s 00:11:54.080 17:28:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.080 17:28:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.080 17:28:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.080 17:28:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.080 17:28:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.080 17:28:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.080 17:28:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.080 17:28:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.080 17:28:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.080 17:28:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.080 17:28:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:54.080 17:28:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:54.080 17:28:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.080 17:28:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.080 17:28:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.080 17:28:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.080 17:28:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:54.080 17:28:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.080 17:28:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.080 17:28:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.080 17:28:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.080 17:28:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.080 17:28:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.080 17:28:20 -- paths/export.sh@5 -- # export PATH 00:11:54.080 17:28:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.080 17:28:20 -- nvmf/common.sh@47 -- # : 0 00:11:54.080 17:28:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.080 17:28:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.080 17:28:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.081 17:28:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.081 17:28:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.081 17:28:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:54.081 17:28:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.081 17:28:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.081 17:28:20 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:54.081 17:28:20 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:54.081 17:28:20 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:54.081 17:28:20 -- target/fio.sh@16 -- # nvmftestinit 00:11:54.081 17:28:20 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:54.081 17:28:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.081 17:28:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:54.081 17:28:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:54.081 17:28:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:54.081 17:28:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.081 17:28:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.081 17:28:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.081 17:28:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:54.081 17:28:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:54.081 17:28:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:54.081 17:28:20 -- common/autotest_common.sh@10 -- # set +x 00:11:55.987 17:28:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:55.987 17:28:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:55.987 17:28:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:55.987 17:28:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:55.987 17:28:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:55.987 17:28:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:55.987 17:28:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:55.987 17:28:22 -- nvmf/common.sh@295 -- # net_devs=() 
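The arrays being initialised here drive the NIC discovery that follows: the script matches PCI functions by vendor/device ID (0x15b3 / 0x1017 for the two ports found just below) and then globs each function's net/ directory in sysfs for the attached interface name. A hedged standalone equivalent using stock tools only, since the actual filtering inside nvmf/common.sh is more involved:

  # list Mellanox (vendor 0x15b3) PCI functions with full domain:bus:dev.fn addresses and numeric IDs
  lspci -Dnn -d 15b3:
  # for each function, the kernel exposes the bound netdev name under its sysfs node
  for pci in $(lspci -Dnn -d 15b3: | awk '{print $1}'); do
      ls "/sys/bus/pci/devices/$pci/net/"
  done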
00:11:55.987 17:28:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:55.987 17:28:22 -- nvmf/common.sh@296 -- # e810=() 00:11:55.987 17:28:22 -- nvmf/common.sh@296 -- # local -ga e810 00:11:55.987 17:28:22 -- nvmf/common.sh@297 -- # x722=() 00:11:55.987 17:28:22 -- nvmf/common.sh@297 -- # local -ga x722 00:11:55.987 17:28:22 -- nvmf/common.sh@298 -- # mlx=() 00:11:55.987 17:28:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:55.987 17:28:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.987 17:28:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.987 17:28:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.987 17:28:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.987 17:28:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.987 17:28:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.987 17:28:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.987 17:28:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.987 17:28:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:55.987 17:28:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.987 17:28:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.987 17:28:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:55.987 17:28:22 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:55.987 17:28:22 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:55.987 17:28:22 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:55.987 17:28:22 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:55.987 17:28:22 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:55.987 17:28:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:55.987 17:28:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:55.987 17:28:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:11:55.987 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:11:55.987 17:28:22 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:55.987 17:28:22 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:55.987 17:28:22 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:55.987 17:28:22 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:55.987 17:28:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:55.987 17:28:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:11:55.987 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:11:55.987 17:28:22 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:55.987 17:28:22 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:55.987 17:28:22 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:11:55.987 17:28:22 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:55.987 17:28:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:55.987 17:28:22 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:55.987 17:28:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:55.987 17:28:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.987 17:28:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:55.987 17:28:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.987 17:28:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: 
mlx_0_0' 00:11:55.987 Found net devices under 0000:09:00.0: mlx_0_0 00:11:55.987 17:28:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.987 17:28:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:55.987 17:28:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.987 17:28:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:55.988 17:28:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.988 17:28:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:11:55.988 Found net devices under 0000:09:00.1: mlx_0_1 00:11:55.988 17:28:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.988 17:28:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:55.988 17:28:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:55.988 17:28:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:55.988 17:28:22 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:55.988 17:28:22 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:55.988 17:28:22 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:55.988 17:28:22 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:55.988 17:28:22 -- nvmf/common.sh@58 -- # uname 00:11:55.988 17:28:22 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:55.988 17:28:22 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:55.988 17:28:22 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:55.988 17:28:22 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:55.988 17:28:22 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:55.988 17:28:22 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:55.988 17:28:22 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:55.988 17:28:22 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:55.988 17:28:22 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:55.988 17:28:22 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:55.988 17:28:22 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:55.988 17:28:22 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:55.988 17:28:22 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:55.988 17:28:22 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:55.988 17:28:22 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:55.988 17:28:22 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:55.988 17:28:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:55.988 17:28:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.988 17:28:22 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:55.988 17:28:22 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:55.988 17:28:22 -- nvmf/common.sh@105 -- # continue 2 00:11:55.988 17:28:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:55.988 17:28:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.988 17:28:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:55.988 17:28:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.988 17:28:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:55.988 17:28:22 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:55.988 17:28:22 -- nvmf/common.sh@105 -- # continue 2 00:11:55.988 17:28:22 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:55.988 17:28:22 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:55.988 17:28:22 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:55.988 17:28:22 -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:55.988 17:28:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:55.988 17:28:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:55.988 17:28:22 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:55.988 17:28:22 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:55.988 17:28:22 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:55.988 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:55.988 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:11:55.988 altname enp9s0f0np0 00:11:55.988 inet 192.168.100.8/24 scope global mlx_0_0 00:11:55.988 valid_lft forever preferred_lft forever 00:11:55.988 17:28:22 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:55.988 17:28:22 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:55.988 17:28:22 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:55.988 17:28:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:55.988 17:28:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:55.988 17:28:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:55.988 17:28:22 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:55.988 17:28:22 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:55.988 17:28:22 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:55.988 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:55.988 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:11:55.988 altname enp9s0f1np1 00:11:55.988 inet 192.168.100.9/24 scope global mlx_0_1 00:11:55.988 valid_lft forever preferred_lft forever 00:11:55.988 17:28:22 -- nvmf/common.sh@411 -- # return 0 00:11:55.988 17:28:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:55.988 17:28:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:55.988 17:28:22 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:55.988 17:28:22 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:55.988 17:28:22 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:55.988 17:28:22 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:55.988 17:28:22 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:55.988 17:28:22 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:55.988 17:28:22 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:55.988 17:28:22 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:55.988 17:28:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:55.988 17:28:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.988 17:28:22 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:55.988 17:28:22 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:55.988 17:28:22 -- nvmf/common.sh@105 -- # continue 2 00:11:55.988 17:28:22 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:55.988 17:28:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.988 17:28:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:55.988 17:28:22 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.988 17:28:22 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:55.988 17:28:22 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:55.988 17:28:22 -- nvmf/common.sh@105 -- # continue 2 00:11:55.988 17:28:22 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:55.988 17:28:22 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:55.988 17:28:22 -- nvmf/common.sh@112 -- # interface=mlx_0_0 
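The get_ip_address helper in play here is just an ip/awk/cut pipeline, so the addresses the tests bind to can be recovered interactively with the same one-liner; the interface names and addresses are the ones reported in the ip addr show output above, nothing else is assumed:

  # first port: prints 192.168.100.8, matching the output above
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1
  # second port: prints 192.168.100.9
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1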
00:11:55.988 17:28:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:55.988 17:28:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:55.988 17:28:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:55.988 17:28:22 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:55.988 17:28:22 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:55.988 17:28:22 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:55.988 17:28:22 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:55.988 17:28:22 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:55.988 17:28:22 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:55.988 17:28:22 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:55.988 192.168.100.9' 00:11:55.988 17:28:22 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:55.988 192.168.100.9' 00:11:55.988 17:28:22 -- nvmf/common.sh@446 -- # head -n 1 00:11:55.988 17:28:22 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:55.988 17:28:22 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:55.988 192.168.100.9' 00:11:55.988 17:28:22 -- nvmf/common.sh@447 -- # tail -n +2 00:11:55.988 17:28:22 -- nvmf/common.sh@447 -- # head -n 1 00:11:55.988 17:28:22 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:55.988 17:28:22 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:55.988 17:28:22 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:55.988 17:28:22 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:55.988 17:28:22 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:55.988 17:28:22 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:55.988 17:28:22 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:55.988 17:28:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:55.988 17:28:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:55.988 17:28:22 -- common/autotest_common.sh@10 -- # set +x 00:11:55.988 17:28:22 -- nvmf/common.sh@470 -- # nvmfpid=1990210 00:11:55.988 17:28:22 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.988 17:28:22 -- nvmf/common.sh@471 -- # waitforlisten 1990210 00:11:55.988 17:28:22 -- common/autotest_common.sh@817 -- # '[' -z 1990210 ']' 00:11:55.988 17:28:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.988 17:28:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:55.988 17:28:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.988 17:28:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:55.988 17:28:22 -- common/autotest_common.sh@10 -- # set +x 00:11:55.988 [2024-04-18 17:28:22.449999] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:11:55.988 [2024-04-18 17:28:22.450093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.988 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.988 [2024-04-18 17:28:22.507823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.248 [2024-04-18 17:28:22.619798] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
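The app_setup_trace notices printed here and just below describe how to inspect the tracepoints enabled by -e 0xFFFF. A short sketch of the two options they mention, assuming the target above is still running with shared-memory id 0 and that the spdk_trace app from the same build is on PATH; the destination path for the copy is arbitrary:

  # live snapshot of the nvmf tracepoints from the running app (shm id 0)
  spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory trace file for offline analysis, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0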
00:11:56.248 [2024-04-18 17:28:22.619859] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.248 [2024-04-18 17:28:22.619887] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.248 [2024-04-18 17:28:22.619898] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.248 [2024-04-18 17:28:22.619908] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.248 [2024-04-18 17:28:22.620040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.248 [2024-04-18 17:28:22.620105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.248 [2024-04-18 17:28:22.620172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.248 [2024-04-18 17:28:22.620174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.248 17:28:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:56.248 17:28:22 -- common/autotest_common.sh@850 -- # return 0 00:11:56.248 17:28:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:56.248 17:28:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:56.248 17:28:22 -- common/autotest_common.sh@10 -- # set +x 00:11:56.248 17:28:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.248 17:28:22 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:56.507 [2024-04-18 17:28:23.014149] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2434b60/0x2439050) succeed. 00:11:56.507 [2024-04-18 17:28:23.024877] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2436150/0x247a6e0) succeed. 
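With the transport and both IB devices up, the fio target that the log builds next is assembled entirely through rpc.py: seven malloc bdevs, a raid0 and a concat raid carved out of five of them, one subsystem carrying all four namespaces, and an RDMA listener that the initiator then connects to. A condensed sketch of that sequence using the same names, address, and ports that appear below; the bdev names are the ones SPDK auto-assigns in this run, and the host NQN/ID flags that the real nvme connect line passes are omitted here:

  rpc=./spdk/scripts/rpc.py
  # seven 64 MiB malloc bdevs with 512 B blocks; SPDK names them Malloc0..Malloc6
  for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done
  # raid0 over two of them, concat over three more, as in the log below
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  # one subsystem, four namespaces, RDMA listener on the first target IP
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for b in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # initiator side, matching the nvme connect invocation below
  sudo nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420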
00:11:56.766 17:28:23 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.028 17:28:23 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:57.028 17:28:23 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.285 17:28:23 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:57.285 17:28:23 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.543 17:28:23 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:57.543 17:28:23 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.801 17:28:24 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:57.801 17:28:24 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:58.059 17:28:24 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.317 17:28:24 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:58.317 17:28:24 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.576 17:28:25 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:58.576 17:28:25 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.834 17:28:25 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:58.834 17:28:25 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:59.092 17:28:25 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:59.350 17:28:25 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:59.350 17:28:25 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:59.608 17:28:26 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:59.608 17:28:26 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:59.866 17:28:26 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:00.124 [2024-04-18 17:28:26.478805] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:00.124 17:28:26 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:00.382 17:28:26 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:00.641 17:28:26 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:03.933 17:28:30 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:03.933 17:28:30 -- common/autotest_common.sh@1184 -- # local 
i=0 00:12:03.933 17:28:30 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.933 17:28:30 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:12:03.933 17:28:30 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:12:03.933 17:28:30 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:06.471 17:28:32 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:06.471 17:28:32 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:06.471 17:28:32 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.471 17:28:32 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:12:06.471 17:28:32 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.471 17:28:32 -- common/autotest_common.sh@1194 -- # return 0 00:12:06.471 17:28:32 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:06.471 [global] 00:12:06.471 thread=1 00:12:06.471 invalidate=1 00:12:06.471 rw=write 00:12:06.471 time_based=1 00:12:06.471 runtime=1 00:12:06.471 ioengine=libaio 00:12:06.471 direct=1 00:12:06.471 bs=4096 00:12:06.471 iodepth=1 00:12:06.471 norandommap=0 00:12:06.471 numjobs=1 00:12:06.471 00:12:06.471 verify_dump=1 00:12:06.471 verify_backlog=512 00:12:06.471 verify_state_save=0 00:12:06.471 do_verify=1 00:12:06.471 verify=crc32c-intel 00:12:06.471 [job0] 00:12:06.471 filename=/dev/nvme0n1 00:12:06.471 [job1] 00:12:06.471 filename=/dev/nvme0n2 00:12:06.471 [job2] 00:12:06.471 filename=/dev/nvme0n3 00:12:06.471 [job3] 00:12:06.471 filename=/dev/nvme0n4 00:12:06.471 Could not set queue depth (nvme0n1) 00:12:06.471 Could not set queue depth (nvme0n2) 00:12:06.471 Could not set queue depth (nvme0n3) 00:12:06.471 Could not set queue depth (nvme0n4) 00:12:06.471 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.471 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.471 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.471 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.471 fio-3.35 00:12:06.471 Starting 4 threads 00:12:07.408 00:12:07.408 job0: (groupid=0, jobs=1): err= 0: pid=1991568: Thu Apr 18 17:28:33 2024 00:12:07.408 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:12:07.408 slat (nsec): min=5635, max=36934, avg=12062.91, stdev=3223.39 00:12:07.408 clat (usec): min=70, max=176, avg=89.73, stdev=14.18 00:12:07.408 lat (usec): min=77, max=191, avg=101.79, stdev=13.69 00:12:07.408 clat percentiles (usec): 00:12:07.408 | 1.00th=[ 74], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 82], 00:12:07.408 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 87], 00:12:07.408 | 70.00th=[ 89], 80.00th=[ 93], 90.00th=[ 117], 95.00th=[ 125], 00:12:07.408 | 99.00th=[ 141], 99.50th=[ 145], 99.90th=[ 157], 99.95th=[ 165], 00:12:07.408 | 99.99th=[ 178] 00:12:07.408 write: IOPS=5025, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1001msec); 0 zone resets 00:12:07.408 slat (nsec): min=5587, max=46203, avg=12875.27, stdev=4163.49 00:12:07.408 clat (usec): min=64, max=327, avg=86.30, stdev=16.71 00:12:07.408 lat (usec): min=70, max=344, avg=99.17, stdev=15.97 00:12:07.408 clat percentiles (usec): 00:12:07.408 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 76], 00:12:07.408 | 30.00th=[ 
78], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 82], 00:12:07.408 | 70.00th=[ 85], 80.00th=[ 98], 90.00th=[ 114], 95.00th=[ 121], 00:12:07.408 | 99.00th=[ 141], 99.50th=[ 149], 99.90th=[ 163], 99.95th=[ 174], 00:12:07.408 | 99.99th=[ 326] 00:12:07.408 bw ( KiB/s): min=19768, max=19768, per=29.70%, avg=19768.00, stdev= 0.00, samples=1 00:12:07.408 iops : min= 4942, max= 4942, avg=4942.00, stdev= 0.00, samples=1 00:12:07.408 lat (usec) : 100=83.81%, 250=16.18%, 500=0.01% 00:12:07.408 cpu : usr=7.20%, sys=11.40%, ctx=9639, majf=0, minf=1 00:12:07.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.408 issued rwts: total=4608,5031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.408 job1: (groupid=0, jobs=1): err= 0: pid=1991569: Thu Apr 18 17:28:33 2024 00:12:07.408 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:12:07.408 slat (nsec): min=4813, max=17808, avg=5588.07, stdev=666.55 00:12:07.408 clat (usec): min=76, max=229, avg=130.56, stdev=21.82 00:12:07.408 lat (usec): min=82, max=247, avg=136.15, stdev=21.83 00:12:07.408 clat percentiles (usec): 00:12:07.408 | 1.00th=[ 87], 5.00th=[ 95], 10.00th=[ 99], 20.00th=[ 109], 00:12:07.408 | 30.00th=[ 122], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 139], 00:12:07.408 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 163], 00:12:07.408 | 99.00th=[ 188], 99.50th=[ 200], 99.90th=[ 215], 99.95th=[ 221], 00:12:07.408 | 99.99th=[ 231] 00:12:07.408 write: IOPS=3886, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1001msec); 0 zone resets 00:12:07.408 slat (nsec): min=5133, max=44097, avg=6023.35, stdev=1064.72 00:12:07.408 clat (usec): min=68, max=323, avg=122.91, stdev=21.09 00:12:07.408 lat (usec): min=79, max=328, avg=128.93, stdev=21.04 00:12:07.408 clat percentiles (usec): 00:12:07.408 | 1.00th=[ 80], 5.00th=[ 90], 10.00th=[ 94], 20.00th=[ 105], 00:12:07.408 | 30.00th=[ 113], 40.00th=[ 118], 50.00th=[ 123], 60.00th=[ 129], 00:12:07.408 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 155], 00:12:07.408 | 99.00th=[ 176], 99.50th=[ 192], 99.90th=[ 210], 99.95th=[ 221], 00:12:07.408 | 99.99th=[ 322] 00:12:07.408 bw ( KiB/s): min=16384, max=16384, per=24.62%, avg=16384.00, stdev= 0.00, samples=1 00:12:07.408 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:12:07.408 lat (usec) : 100=14.34%, 250=85.64%, 500=0.01% 00:12:07.408 cpu : usr=2.70%, sys=6.00%, ctx=7474, majf=0, minf=1 00:12:07.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.408 issued rwts: total=3584,3890,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.408 job2: (groupid=0, jobs=1): err= 0: pid=1991570: Thu Apr 18 17:28:33 2024 00:12:07.408 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:12:07.408 slat (nsec): min=5575, max=44600, avg=12244.40, stdev=3616.04 00:12:07.408 clat (usec): min=83, max=151, avg=106.04, stdev= 7.82 00:12:07.408 lat (usec): min=90, max=175, avg=118.29, stdev= 9.36 00:12:07.408 clat percentiles (usec): 00:12:07.408 | 1.00th=[ 90], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 99], 
00:12:07.408 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 60.00th=[ 109], 00:12:07.408 | 70.00th=[ 111], 80.00th=[ 113], 90.00th=[ 117], 95.00th=[ 120], 00:12:07.408 | 99.00th=[ 127], 99.50th=[ 129], 99.90th=[ 135], 99.95th=[ 141], 00:12:07.408 | 99.99th=[ 153] 00:12:07.408 write: IOPS=4394, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1001msec); 0 zone resets 00:12:07.408 slat (nsec): min=5686, max=37527, avg=12889.78, stdev=4212.84 00:12:07.408 clat (usec): min=77, max=131, avg=97.86, stdev= 7.42 00:12:07.408 lat (usec): min=84, max=151, avg=110.75, stdev= 9.25 00:12:07.408 clat percentiles (usec): 00:12:07.408 | 1.00th=[ 83], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 92], 00:12:07.408 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 98], 60.00th=[ 99], 00:12:07.408 | 70.00th=[ 101], 80.00th=[ 104], 90.00th=[ 108], 95.00th=[ 111], 00:12:07.408 | 99.00th=[ 119], 99.50th=[ 122], 99.90th=[ 129], 99.95th=[ 130], 00:12:07.408 | 99.99th=[ 133] 00:12:07.408 bw ( KiB/s): min=17648, max=17648, per=26.52%, avg=17648.00, stdev= 0.00, samples=1 00:12:07.408 iops : min= 4412, max= 4412, avg=4412.00, stdev= 0.00, samples=1 00:12:07.408 lat (usec) : 100=44.37%, 250=55.63% 00:12:07.409 cpu : usr=5.60%, sys=11.10%, ctx=8495, majf=0, minf=2 00:12:07.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.409 issued rwts: total=4096,4399,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.409 job3: (groupid=0, jobs=1): err= 0: pid=1991571: Thu Apr 18 17:28:33 2024 00:12:07.409 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:12:07.409 slat (nsec): min=5529, max=39072, avg=12153.43, stdev=3886.48 00:12:07.409 clat (usec): min=87, max=291, avg=145.62, stdev=29.38 00:12:07.409 lat (usec): min=94, max=304, avg=157.77, stdev=31.58 00:12:07.409 clat percentiles (usec): 00:12:07.409 | 1.00th=[ 94], 5.00th=[ 98], 10.00th=[ 102], 20.00th=[ 114], 00:12:07.409 | 30.00th=[ 135], 40.00th=[ 141], 50.00th=[ 147], 60.00th=[ 153], 00:12:07.409 | 70.00th=[ 161], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 192], 00:12:07.409 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 227], 99.95th=[ 241], 00:12:07.409 | 99.99th=[ 293] 00:12:07.409 write: IOPS=3330, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1001msec); 0 zone resets 00:12:07.409 slat (nsec): min=5728, max=46190, avg=13065.27, stdev=5188.47 00:12:07.409 clat (usec): min=80, max=260, avg=135.07, stdev=29.68 00:12:07.409 lat (usec): min=90, max=275, avg=148.13, stdev=33.29 00:12:07.409 clat percentiles (usec): 00:12:07.409 | 1.00th=[ 87], 5.00th=[ 92], 10.00th=[ 95], 20.00th=[ 101], 00:12:07.409 | 30.00th=[ 119], 40.00th=[ 131], 50.00th=[ 137], 60.00th=[ 143], 00:12:07.409 | 70.00th=[ 151], 80.00th=[ 161], 90.00th=[ 178], 95.00th=[ 184], 00:12:07.409 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 221], 99.95th=[ 239], 00:12:07.409 | 99.99th=[ 262] 00:12:07.409 bw ( KiB/s): min=15352, max=15352, per=23.07%, avg=15352.00, stdev= 0.00, samples=1 00:12:07.409 iops : min= 3838, max= 3838, avg=3838.00, stdev= 0.00, samples=1 00:12:07.409 lat (usec) : 100=13.49%, 250=86.48%, 500=0.03% 00:12:07.409 cpu : usr=4.20%, sys=8.40%, ctx=6406, majf=0, minf=1 00:12:07.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.409 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.409 issued rwts: total=3072,3334,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.409 00:12:07.409 Run status group 0 (all jobs): 00:12:07.409 READ: bw=59.9MiB/s (62.9MB/s), 12.0MiB/s-18.0MiB/s (12.6MB/s-18.9MB/s), io=60.0MiB (62.9MB), run=1001-1001msec 00:12:07.409 WRITE: bw=65.0MiB/s (68.1MB/s), 13.0MiB/s-19.6MiB/s (13.6MB/s-20.6MB/s), io=65.1MiB (68.2MB), run=1001-1001msec 00:12:07.409 00:12:07.409 Disk stats (read/write): 00:12:07.409 nvme0n1: ios=4117/4096, merge=0/0, ticks=349/326, in_queue=675, util=86.47% 00:12:07.409 nvme0n2: ios=3072/3304, merge=0/0, ticks=393/392, in_queue=785, util=86.76% 00:12:07.409 nvme0n3: ios=3584/3679, merge=0/0, ticks=379/326, in_queue=705, util=89.03% 00:12:07.409 nvme0n4: ios=2560/2996, merge=0/0, ticks=360/383, in_queue=743, util=89.68% 00:12:07.409 17:28:33 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:07.409 [global] 00:12:07.409 thread=1 00:12:07.409 invalidate=1 00:12:07.409 rw=randwrite 00:12:07.409 time_based=1 00:12:07.409 runtime=1 00:12:07.409 ioengine=libaio 00:12:07.409 direct=1 00:12:07.409 bs=4096 00:12:07.409 iodepth=1 00:12:07.409 norandommap=0 00:12:07.409 numjobs=1 00:12:07.409 00:12:07.409 verify_dump=1 00:12:07.409 verify_backlog=512 00:12:07.409 verify_state_save=0 00:12:07.409 do_verify=1 00:12:07.409 verify=crc32c-intel 00:12:07.409 [job0] 00:12:07.409 filename=/dev/nvme0n1 00:12:07.409 [job1] 00:12:07.409 filename=/dev/nvme0n2 00:12:07.409 [job2] 00:12:07.409 filename=/dev/nvme0n3 00:12:07.409 [job3] 00:12:07.409 filename=/dev/nvme0n4 00:12:07.409 Could not set queue depth (nvme0n1) 00:12:07.409 Could not set queue depth (nvme0n2) 00:12:07.409 Could not set queue depth (nvme0n3) 00:12:07.409 Could not set queue depth (nvme0n4) 00:12:07.667 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:07.667 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:07.667 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:07.667 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:07.667 fio-3.35 00:12:07.667 Starting 4 threads 00:12:09.044 00:12:09.044 job0: (groupid=0, jobs=1): err= 0: pid=1991919: Thu Apr 18 17:28:35 2024 00:12:09.044 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:12:09.044 slat (nsec): min=5460, max=36859, avg=11200.34, stdev=3599.50 00:12:09.044 clat (usec): min=69, max=200, avg=96.75, stdev=18.66 00:12:09.044 lat (usec): min=76, max=213, avg=107.96, stdev=20.17 00:12:09.044 clat percentiles (usec): 00:12:09.044 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 84], 00:12:09.044 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 92], 60.00th=[ 94], 00:12:09.044 | 70.00th=[ 98], 80.00th=[ 108], 90.00th=[ 127], 95.00th=[ 137], 00:12:09.044 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 178], 99.95th=[ 184], 00:12:09.044 | 99.99th=[ 200] 00:12:09.044 write: IOPS=4847, BW=18.9MiB/s (19.9MB/s)(19.0MiB/1001msec); 0 zone resets 00:12:09.044 slat (nsec): min=5547, max=39821, avg=11344.00, stdev=4186.12 00:12:09.044 clat (usec): min=62, max=417, avg=86.25, stdev=16.88 00:12:09.044 lat (usec): min=69, max=424, avg=97.60, stdev=18.50 00:12:09.044 clat 
percentiles (usec): 00:12:09.044 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 76], 00:12:09.044 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 85], 00:12:09.044 | 70.00th=[ 88], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 122], 00:12:09.044 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 200], 99.95th=[ 204], 00:12:09.044 | 99.99th=[ 416] 00:12:09.044 bw ( KiB/s): min=19912, max=19912, per=28.07%, avg=19912.00, stdev= 0.00, samples=1 00:12:09.044 iops : min= 4978, max= 4978, avg=4978.00, stdev= 0.00, samples=1 00:12:09.044 lat (usec) : 100=80.43%, 250=19.56%, 500=0.01% 00:12:09.044 cpu : usr=5.80%, sys=10.90%, ctx=9460, majf=0, minf=1 00:12:09.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:09.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.044 issued rwts: total=4608,4852,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:09.044 job1: (groupid=0, jobs=1): err= 0: pid=1991920: Thu Apr 18 17:28:35 2024 00:12:09.044 read: IOPS=4677, BW=18.3MiB/s (19.2MB/s)(18.3MiB/1001msec) 00:12:09.044 slat (nsec): min=5409, max=36015, avg=11220.52, stdev=3587.19 00:12:09.044 clat (usec): min=69, max=201, avg=93.17, stdev=19.19 00:12:09.044 lat (usec): min=74, max=214, avg=104.40, stdev=20.72 00:12:09.044 clat percentiles (usec): 00:12:09.044 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 79], 20.00th=[ 81], 00:12:09.044 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 89], 00:12:09.044 | 70.00th=[ 93], 80.00th=[ 102], 90.00th=[ 126], 95.00th=[ 137], 00:12:09.044 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 184], 99.95th=[ 198], 00:12:09.044 | 99.99th=[ 202] 00:12:09.044 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:12:09.044 slat (nsec): min=5395, max=43897, avg=11236.64, stdev=4419.66 00:12:09.044 clat (usec): min=62, max=185, avg=82.83, stdev=15.35 00:12:09.044 lat (usec): min=68, max=199, avg=94.07, stdev=17.39 00:12:09.044 clat percentiles (usec): 00:12:09.044 | 1.00th=[ 67], 5.00th=[ 70], 10.00th=[ 71], 20.00th=[ 74], 00:12:09.044 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 81], 00:12:09.044 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 98], 95.00th=[ 120], 00:12:09.044 | 99.00th=[ 149], 99.50th=[ 157], 99.90th=[ 178], 99.95th=[ 186], 00:12:09.044 | 99.99th=[ 186] 00:12:09.044 bw ( KiB/s): min=20439, max=20439, per=28.81%, avg=20439.00, stdev= 0.00, samples=1 00:12:09.044 iops : min= 5109, max= 5109, avg=5109.00, stdev= 0.00, samples=1 00:12:09.044 lat (usec) : 100=84.98%, 250=15.02% 00:12:09.044 cpu : usr=6.80%, sys=10.20%, ctx=9802, majf=0, minf=2 00:12:09.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:09.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.044 issued rwts: total=4682,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:09.044 job2: (groupid=0, jobs=1): err= 0: pid=1991921: Thu Apr 18 17:28:35 2024 00:12:09.044 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:12:09.044 slat (nsec): min=5991, max=36532, avg=12211.88, stdev=3694.67 00:12:09.044 clat (usec): min=85, max=392, avg=121.31, stdev=22.60 00:12:09.044 lat (usec): min=93, max=405, avg=133.52, stdev=22.15 00:12:09.044 clat 
percentiles (usec): 00:12:09.044 | 1.00th=[ 95], 5.00th=[ 99], 10.00th=[ 102], 20.00th=[ 105], 00:12:09.044 | 30.00th=[ 109], 40.00th=[ 111], 50.00th=[ 114], 60.00th=[ 118], 00:12:09.044 | 70.00th=[ 125], 80.00th=[ 137], 90.00th=[ 153], 95.00th=[ 167], 00:12:09.044 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 219], 99.95th=[ 235], 00:12:09.044 | 99.99th=[ 392] 00:12:09.044 write: IOPS=3988, BW=15.6MiB/s (16.3MB/s)(15.6MiB/1001msec); 0 zone resets 00:12:09.044 slat (nsec): min=5824, max=57838, avg=12973.55, stdev=4345.80 00:12:09.044 clat (usec): min=80, max=209, avg=111.23, stdev=18.71 00:12:09.044 lat (usec): min=88, max=221, avg=124.21, stdev=18.55 00:12:09.044 clat percentiles (usec): 00:12:09.044 | 1.00th=[ 87], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 97], 00:12:09.044 | 30.00th=[ 100], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 110], 00:12:09.044 | 70.00th=[ 117], 80.00th=[ 127], 90.00th=[ 139], 95.00th=[ 149], 00:12:09.044 | 99.00th=[ 172], 99.50th=[ 182], 99.90th=[ 204], 99.95th=[ 208], 00:12:09.044 | 99.99th=[ 210] 00:12:09.044 bw ( KiB/s): min=16351, max=16351, per=23.05%, avg=16351.00, stdev= 0.00, samples=1 00:12:09.044 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:12:09.044 lat (usec) : 100=20.18%, 250=79.80%, 500=0.01% 00:12:09.044 cpu : usr=5.70%, sys=9.10%, ctx=7577, majf=0, minf=1 00:12:09.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:09.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.044 issued rwts: total=3584,3992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.044 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:09.044 job3: (groupid=0, jobs=1): err= 0: pid=1991922: Thu Apr 18 17:28:35 2024 00:12:09.044 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:12:09.044 slat (nsec): min=6009, max=40166, avg=12733.81, stdev=4164.91 00:12:09.044 clat (usec): min=84, max=243, avg=122.18, stdev=20.07 00:12:09.044 lat (usec): min=92, max=255, avg=134.92, stdev=19.80 00:12:09.044 clat percentiles (usec): 00:12:09.044 | 1.00th=[ 96], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 108], 00:12:09.044 | 30.00th=[ 112], 40.00th=[ 114], 50.00th=[ 117], 60.00th=[ 121], 00:12:09.044 | 70.00th=[ 126], 80.00th=[ 135], 90.00th=[ 149], 95.00th=[ 165], 00:12:09.044 | 99.00th=[ 194], 99.50th=[ 204], 99.90th=[ 223], 99.95th=[ 237], 00:12:09.045 | 99.99th=[ 243] 00:12:09.045 write: IOPS=3783, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1001msec); 0 zone resets 00:12:09.045 slat (nsec): min=5917, max=64081, avg=12905.31, stdev=4664.45 00:12:09.045 clat (usec): min=84, max=222, avg=116.56, stdev=18.88 00:12:09.045 lat (usec): min=93, max=238, avg=129.46, stdev=18.29 00:12:09.045 clat percentiles (usec): 00:12:09.045 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 101], 00:12:09.045 | 30.00th=[ 104], 40.00th=[ 108], 50.00th=[ 112], 60.00th=[ 117], 00:12:09.045 | 70.00th=[ 125], 80.00th=[ 133], 90.00th=[ 143], 95.00th=[ 153], 00:12:09.045 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 204], 99.95th=[ 221], 00:12:09.045 | 99.99th=[ 223] 00:12:09.045 bw ( KiB/s): min=16351, max=16351, per=23.05%, avg=16351.00, stdev= 0.00, samples=1 00:12:09.045 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:12:09.045 lat (usec) : 100=11.72%, 250=88.28% 00:12:09.045 cpu : usr=5.50%, sys=9.30%, ctx=7372, majf=0, minf=1 00:12:09.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:09.045 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.045 issued rwts: total=3584,3787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:09.045 00:12:09.045 Run status group 0 (all jobs): 00:12:09.045 READ: bw=64.2MiB/s (67.3MB/s), 14.0MiB/s-18.3MiB/s (14.7MB/s-19.2MB/s), io=64.3MiB (67.4MB), run=1001-1001msec 00:12:09.045 WRITE: bw=69.3MiB/s (72.6MB/s), 14.8MiB/s-20.0MiB/s (15.5MB/s-20.9MB/s), io=69.3MiB (72.7MB), run=1001-1001msec 00:12:09.045 00:12:09.045 Disk stats (read/write): 00:12:09.045 nvme0n1: ios=3817/4096, merge=0/0, ticks=380/354, in_queue=734, util=86.37% 00:12:09.045 nvme0n2: ios=4033/4096, merge=0/0, ticks=375/349, in_queue=724, util=86.60% 00:12:09.045 nvme0n3: ios=3072/3393, merge=0/0, ticks=360/385, in_queue=745, util=89.04% 00:12:09.045 nvme0n4: ios=3072/3242, merge=0/0, ticks=364/370, in_queue=734, util=89.70% 00:12:09.045 17:28:35 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:09.045 [global] 00:12:09.045 thread=1 00:12:09.045 invalidate=1 00:12:09.045 rw=write 00:12:09.045 time_based=1 00:12:09.045 runtime=1 00:12:09.045 ioengine=libaio 00:12:09.045 direct=1 00:12:09.045 bs=4096 00:12:09.045 iodepth=128 00:12:09.045 norandommap=0 00:12:09.045 numjobs=1 00:12:09.045 00:12:09.045 verify_dump=1 00:12:09.045 verify_backlog=512 00:12:09.045 verify_state_save=0 00:12:09.045 do_verify=1 00:12:09.045 verify=crc32c-intel 00:12:09.045 [job0] 00:12:09.045 filename=/dev/nvme0n1 00:12:09.045 [job1] 00:12:09.045 filename=/dev/nvme0n2 00:12:09.045 [job2] 00:12:09.045 filename=/dev/nvme0n3 00:12:09.045 [job3] 00:12:09.045 filename=/dev/nvme0n4 00:12:09.045 Could not set queue depth (nvme0n1) 00:12:09.045 Could not set queue depth (nvme0n2) 00:12:09.045 Could not set queue depth (nvme0n3) 00:12:09.045 Could not set queue depth (nvme0n4) 00:12:09.045 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:09.045 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:09.045 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:09.045 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:09.045 fio-3.35 00:12:09.045 Starting 4 threads 00:12:10.419 00:12:10.419 job0: (groupid=0, jobs=1): err= 0: pid=1992148: Thu Apr 18 17:28:36 2024 00:12:10.419 read: IOPS=5673, BW=22.2MiB/s (23.2MB/s)(22.2MiB/1003msec) 00:12:10.419 slat (usec): min=2, max=4440, avg=80.35, stdev=322.74 00:12:10.419 clat (usec): min=1488, max=21536, avg=11133.48, stdev=4513.92 00:12:10.419 lat (usec): min=2764, max=21836, avg=11213.83, stdev=4537.77 00:12:10.419 clat percentiles (usec): 00:12:10.419 | 1.00th=[ 4948], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7242], 00:12:10.419 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 9110], 60.00th=[11863], 00:12:10.419 | 70.00th=[13698], 80.00th=[15795], 90.00th=[18220], 95.00th=[20055], 00:12:10.419 | 99.00th=[20579], 99.50th=[20579], 99.90th=[20579], 99.95th=[20579], 00:12:10.419 | 99.99th=[21627] 00:12:10.419 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:12:10.419 slat (usec): min=3, max=4393, avg=80.79, stdev=331.21 00:12:10.419 clat (usec): min=3965, 
max=20102, avg=10351.69, stdev=4361.88 00:12:10.419 lat (usec): min=3977, max=20114, avg=10432.48, stdev=4386.05 00:12:10.419 clat percentiles (usec): 00:12:10.419 | 1.00th=[ 4621], 5.00th=[ 5997], 10.00th=[ 6390], 20.00th=[ 6783], 00:12:10.419 | 30.00th=[ 7046], 40.00th=[ 7439], 50.00th=[ 8160], 60.00th=[10028], 00:12:10.419 | 70.00th=[12649], 80.00th=[14353], 90.00th=[16909], 95.00th=[19530], 00:12:10.419 | 99.00th=[19792], 99.50th=[19792], 99.90th=[20055], 99.95th=[20055], 00:12:10.419 | 99.99th=[20055] 00:12:10.419 bw ( KiB/s): min=19936, max=28672, per=26.21%, avg=24304.00, stdev=6177.28, samples=2 00:12:10.419 iops : min= 4984, max= 7168, avg=6076.00, stdev=1544.32, samples=2 00:12:10.419 lat (msec) : 2=0.01%, 4=0.28%, 10=57.23%, 20=39.57%, 50=2.92% 00:12:10.419 cpu : usr=6.39%, sys=8.58%, ctx=1073, majf=0, minf=1 00:12:10.419 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:10.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.419 issued rwts: total=5691,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.419 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.419 job1: (groupid=0, jobs=1): err= 0: pid=1992149: Thu Apr 18 17:28:36 2024 00:12:10.420 read: IOPS=5163, BW=20.2MiB/s (21.1MB/s)(20.2MiB/1003msec) 00:12:10.420 slat (usec): min=2, max=4576, avg=94.69, stdev=401.95 00:12:10.420 clat (usec): min=2020, max=20703, avg=12770.36, stdev=4477.57 00:12:10.420 lat (usec): min=2029, max=20713, avg=12865.05, stdev=4493.36 00:12:10.420 clat percentiles (usec): 00:12:10.420 | 1.00th=[ 5473], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7701], 00:12:10.420 | 30.00th=[ 9241], 40.00th=[11207], 50.00th=[12911], 60.00th=[14091], 00:12:10.420 | 70.00th=[15401], 80.00th=[16909], 90.00th=[19530], 95.00th=[20055], 00:12:10.420 | 99.00th=[20579], 99.50th=[20579], 99.90th=[20579], 99.95th=[20579], 00:12:10.420 | 99.99th=[20579] 00:12:10.420 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:12:10.420 slat (usec): min=3, max=6150, avg=83.15, stdev=388.48 00:12:10.420 clat (usec): min=3710, max=22258, avg=10775.92, stdev=4304.98 00:12:10.420 lat (usec): min=3722, max=23500, avg=10859.07, stdev=4327.40 00:12:10.420 clat percentiles (usec): 00:12:10.420 | 1.00th=[ 4883], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 6783], 00:12:10.420 | 30.00th=[ 7242], 40.00th=[ 8225], 50.00th=[ 9765], 60.00th=[11207], 00:12:10.420 | 70.00th=[12649], 80.00th=[14615], 90.00th=[17957], 95.00th=[19530], 00:12:10.420 | 99.00th=[20055], 99.50th=[20841], 99.90th=[20841], 99.95th=[22152], 00:12:10.420 | 99.99th=[22152] 00:12:10.420 bw ( KiB/s): min=18808, max=25704, per=24.00%, avg=22256.00, stdev=4876.21, samples=2 00:12:10.420 iops : min= 4702, max= 6426, avg=5564.00, stdev=1219.05, samples=2 00:12:10.420 lat (msec) : 4=0.14%, 10=42.71%, 20=53.53%, 50=3.63% 00:12:10.420 cpu : usr=5.19%, sys=8.08%, ctx=1034, majf=0, minf=1 00:12:10.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:10.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.420 issued rwts: total=5179,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.420 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.420 job2: (groupid=0, jobs=1): err= 0: pid=1992150: Thu Apr 18 17:28:36 2024 00:12:10.420 read: IOPS=4127, 
BW=16.1MiB/s (16.9MB/s)(16.1MiB/1001msec) 00:12:10.420 slat (usec): min=3, max=4683, avg=119.92, stdev=459.12 00:12:10.420 clat (usec): min=807, max=20624, avg=15519.83, stdev=3763.48 00:12:10.420 lat (usec): min=1553, max=20665, avg=15639.75, stdev=3764.33 00:12:10.420 clat percentiles (usec): 00:12:10.420 | 1.00th=[ 4817], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[12125], 00:12:10.420 | 30.00th=[14091], 40.00th=[15401], 50.00th=[16581], 60.00th=[17171], 00:12:10.420 | 70.00th=[17695], 80.00th=[19268], 90.00th=[19792], 95.00th=[20317], 00:12:10.420 | 99.00th=[20579], 99.50th=[20579], 99.90th=[20579], 99.95th=[20579], 00:12:10.420 | 99.99th=[20579] 00:12:10.420 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:12:10.420 slat (usec): min=3, max=4872, avg=101.51, stdev=398.24 00:12:10.420 clat (usec): min=3986, max=23415, avg=13569.92, stdev=3965.78 00:12:10.420 lat (usec): min=3997, max=23419, avg=13671.43, stdev=3979.07 00:12:10.420 clat percentiles (usec): 00:12:10.420 | 1.00th=[ 6063], 5.00th=[ 7898], 10.00th=[ 8225], 20.00th=[ 8979], 00:12:10.420 | 30.00th=[10159], 40.00th=[12780], 50.00th=[15008], 60.00th=[15926], 00:12:10.420 | 70.00th=[16319], 80.00th=[16909], 90.00th=[18220], 95.00th=[19006], 00:12:10.420 | 99.00th=[21103], 99.50th=[22414], 99.90th=[23462], 99.95th=[23462], 00:12:10.420 | 99.99th=[23462] 00:12:10.420 bw ( KiB/s): min=19096, max=19096, per=20.59%, avg=19096.00, stdev= 0.00, samples=1 00:12:10.420 iops : min= 4774, max= 4774, avg=4774.00, stdev= 0.00, samples=1 00:12:10.420 lat (usec) : 1000=0.01% 00:12:10.420 lat (msec) : 2=0.21%, 4=0.21%, 10=20.41%, 20=74.35%, 50=4.82% 00:12:10.420 cpu : usr=4.20%, sys=7.60%, ctx=835, majf=0, minf=1 00:12:10.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:10.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.420 issued rwts: total=4132,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.420 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.420 job3: (groupid=0, jobs=1): err= 0: pid=1992151: Thu Apr 18 17:28:36 2024 00:12:10.420 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:12:10.420 slat (usec): min=3, max=5656, avg=72.46, stdev=278.05 00:12:10.420 clat (usec): min=3686, max=19117, avg=9730.39, stdev=2634.65 00:12:10.420 lat (usec): min=4248, max=19130, avg=9802.85, stdev=2647.26 00:12:10.420 clat percentiles (usec): 00:12:10.420 | 1.00th=[ 5604], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 7898], 00:12:10.420 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9503], 00:12:10.420 | 70.00th=[ 9765], 80.00th=[10683], 90.00th=[13829], 95.00th=[16319], 00:12:10.420 | 99.00th=[17695], 99.50th=[18220], 99.90th=[18220], 99.95th=[18220], 00:12:10.420 | 99.99th=[19006] 00:12:10.420 write: IOPS=6853, BW=26.8MiB/s (28.1MB/s)(26.8MiB/1002msec); 0 zone resets 00:12:10.420 slat (usec): min=3, max=4782, avg=67.17, stdev=242.34 00:12:10.420 clat (usec): min=1541, max=17509, avg=9043.49, stdev=2491.67 00:12:10.420 lat (usec): min=1787, max=17539, avg=9110.66, stdev=2500.05 00:12:10.420 clat percentiles (usec): 00:12:10.420 | 1.00th=[ 4621], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 7242], 00:12:10.420 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8979], 00:12:10.420 | 70.00th=[ 9110], 80.00th=[10159], 90.00th=[13304], 95.00th=[14222], 00:12:10.420 | 99.00th=[16712], 99.50th=[16909], 99.90th=[16909], 
99.95th=[17171], 00:12:10.420 | 99.99th=[17433] 00:12:10.420 bw ( KiB/s): min=24576, max=29344, per=29.07%, avg=26960.00, stdev=3371.49, samples=2 00:12:10.420 iops : min= 6144, max= 7336, avg=6740.00, stdev=842.87, samples=2 00:12:10.420 lat (msec) : 2=0.12%, 4=0.21%, 10=77.53%, 20=22.14% 00:12:10.420 cpu : usr=5.79%, sys=12.09%, ctx=974, majf=0, minf=1 00:12:10.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:10.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.420 issued rwts: total=6656,6867,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.420 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.420 00:12:10.420 Run status group 0 (all jobs): 00:12:10.420 READ: bw=84.3MiB/s (88.4MB/s), 16.1MiB/s-25.9MiB/s (16.9MB/s-27.2MB/s), io=84.6MiB (88.7MB), run=1001-1003msec 00:12:10.420 WRITE: bw=90.6MiB/s (95.0MB/s), 18.0MiB/s-26.8MiB/s (18.9MB/s-28.1MB/s), io=90.8MiB (95.2MB), run=1001-1003msec 00:12:10.420 00:12:10.420 Disk stats (read/write): 00:12:10.420 nvme0n1: ios=5170/5321, merge=0/0, ticks=13544/13701, in_queue=27245, util=86.07% 00:12:10.420 nvme0n2: ios=4608/5076, merge=0/0, ticks=13877/13754, in_queue=27631, util=86.38% 00:12:10.420 nvme0n3: ios=3584/3858, merge=0/0, ticks=13490/13250, in_queue=26740, util=88.71% 00:12:10.420 nvme0n4: ios=5439/5632, merge=0/0, ticks=18090/17720, in_queue=35810, util=89.58% 00:12:10.420 17:28:36 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:10.420 [global] 00:12:10.420 thread=1 00:12:10.420 invalidate=1 00:12:10.420 rw=randwrite 00:12:10.420 time_based=1 00:12:10.420 runtime=1 00:12:10.420 ioengine=libaio 00:12:10.420 direct=1 00:12:10.420 bs=4096 00:12:10.420 iodepth=128 00:12:10.420 norandommap=0 00:12:10.420 numjobs=1 00:12:10.420 00:12:10.420 verify_dump=1 00:12:10.420 verify_backlog=512 00:12:10.420 verify_state_save=0 00:12:10.420 do_verify=1 00:12:10.420 verify=crc32c-intel 00:12:10.420 [job0] 00:12:10.420 filename=/dev/nvme0n1 00:12:10.420 [job1] 00:12:10.420 filename=/dev/nvme0n2 00:12:10.420 [job2] 00:12:10.420 filename=/dev/nvme0n3 00:12:10.420 [job3] 00:12:10.420 filename=/dev/nvme0n4 00:12:10.420 Could not set queue depth (nvme0n1) 00:12:10.420 Could not set queue depth (nvme0n2) 00:12:10.420 Could not set queue depth (nvme0n3) 00:12:10.420 Could not set queue depth (nvme0n4) 00:12:10.420 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:10.420 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:10.420 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:10.420 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:10.420 fio-3.35 00:12:10.420 Starting 4 threads 00:12:11.822 00:12:11.822 job0: (groupid=0, jobs=1): err= 0: pid=1992383: Thu Apr 18 17:28:38 2024 00:12:11.822 read: IOPS=3995, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1006msec) 00:12:11.822 slat (usec): min=2, max=12018, avg=124.46, stdev=762.31 00:12:11.822 clat (usec): min=1573, max=44112, avg=15858.77, stdev=10602.76 00:12:11.822 lat (usec): min=3106, max=44161, avg=15983.23, stdev=10693.02 00:12:11.822 clat percentiles (usec): 00:12:11.822 | 1.00th=[ 3523], 5.00th=[ 4883], 
10.00th=[ 5604], 20.00th=[ 6259], 00:12:11.822 | 30.00th=[ 6521], 40.00th=[ 7832], 50.00th=[12518], 60.00th=[15139], 00:12:11.822 | 70.00th=[23462], 80.00th=[28443], 90.00th=[32375], 95.00th=[33817], 00:12:11.822 | 99.00th=[35390], 99.50th=[36963], 99.90th=[41681], 99.95th=[43254], 00:12:11.822 | 99.99th=[44303] 00:12:11.822 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:12:11.822 slat (usec): min=3, max=12816, avg=115.06, stdev=730.66 00:12:11.822 clat (usec): min=2670, max=42091, avg=15416.18, stdev=10987.83 00:12:11.822 lat (usec): min=2676, max=42127, avg=15531.25, stdev=11070.95 00:12:11.822 clat percentiles (usec): 00:12:11.822 | 1.00th=[ 3163], 5.00th=[ 3916], 10.00th=[ 5014], 20.00th=[ 5604], 00:12:11.822 | 30.00th=[ 5800], 40.00th=[ 6063], 50.00th=[10814], 60.00th=[16909], 00:12:11.822 | 70.00th=[25560], 80.00th=[28705], 90.00th=[31589], 95.00th=[32375], 00:12:11.822 | 99.00th=[33817], 99.50th=[34341], 99.90th=[39060], 99.95th=[40109], 00:12:11.822 | 99.99th=[42206] 00:12:11.822 bw ( KiB/s): min=16384, max=16384, per=19.96%, avg=16384.00, stdev= 0.00, samples=2 00:12:11.822 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:12:11.822 lat (msec) : 2=0.01%, 4=4.07%, 10=43.73%, 20=16.70%, 50=35.49% 00:12:11.822 cpu : usr=3.78%, sys=5.87%, ctx=736, majf=0, minf=1 00:12:11.822 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:11.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:11.822 issued rwts: total=4019,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.822 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:11.822 job1: (groupid=0, jobs=1): err= 0: pid=1992384: Thu Apr 18 17:28:38 2024 00:12:11.822 read: IOPS=6217, BW=24.3MiB/s (25.5MB/s)(24.5MiB/1008msec) 00:12:11.822 slat (usec): min=3, max=4595, avg=76.95, stdev=326.56 00:12:11.822 clat (usec): min=3466, max=17506, avg=10332.80, stdev=1823.29 00:12:11.822 lat (usec): min=3472, max=17520, avg=10409.74, stdev=1852.82 00:12:11.822 clat percentiles (usec): 00:12:11.822 | 1.00th=[ 6587], 5.00th=[ 7832], 10.00th=[ 7963], 20.00th=[ 8455], 00:12:11.822 | 30.00th=[ 8979], 40.00th=[10028], 50.00th=[10683], 60.00th=[11076], 00:12:11.822 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12518], 95.00th=[13042], 00:12:11.822 | 99.00th=[14353], 99.50th=[14877], 99.90th=[15926], 99.95th=[15926], 00:12:11.822 | 99.99th=[17433] 00:12:11.822 write: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec); 0 zone resets 00:12:11.822 slat (usec): min=3, max=4781, avg=72.82, stdev=301.55 00:12:11.822 clat (usec): min=2911, max=15623, avg=9484.77, stdev=1667.74 00:12:11.822 lat (usec): min=2920, max=15638, avg=9557.60, stdev=1697.16 00:12:11.822 clat percentiles (usec): 00:12:11.822 | 1.00th=[ 5604], 5.00th=[ 7046], 10.00th=[ 7373], 20.00th=[ 8029], 00:12:11.822 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9765], 60.00th=[10028], 00:12:11.822 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11600], 95.00th=[12256], 00:12:11.822 | 99.00th=[13304], 99.50th=[13829], 99.90th=[15270], 99.95th=[15533], 00:12:11.822 | 99.99th=[15664] 00:12:11.822 bw ( KiB/s): min=25800, max=27361, per=32.38%, avg=26580.50, stdev=1103.79, samples=2 00:12:11.822 iops : min= 6450, max= 6840, avg=6645.00, stdev=275.77, samples=2 00:12:11.822 lat (msec) : 4=0.28%, 10=50.62%, 20=49.11% 00:12:11.822 cpu : usr=4.37%, sys=5.76%, ctx=924, majf=0, minf=1 00:12:11.822 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:12:11.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:11.822 issued rwts: total=6267,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.822 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:11.822 job2: (groupid=0, jobs=1): err= 0: pid=1992385: Thu Apr 18 17:28:38 2024 00:12:11.822 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:12:11.822 slat (usec): min=2, max=9729, avg=92.66, stdev=538.40 00:12:11.822 clat (usec): min=3317, max=42744, avg=12213.48, stdev=6305.35 00:12:11.822 lat (usec): min=3328, max=42772, avg=12306.14, stdev=6358.94 00:12:11.822 clat percentiles (usec): 00:12:11.822 | 1.00th=[ 4178], 5.00th=[ 5407], 10.00th=[ 6390], 20.00th=[ 7570], 00:12:11.822 | 30.00th=[ 7832], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[12256], 00:12:11.822 | 70.00th=[15139], 80.00th=[17171], 90.00th=[18744], 95.00th=[23462], 00:12:11.822 | 99.00th=[33424], 99.50th=[33424], 99.90th=[34341], 99.95th=[42206], 00:12:11.822 | 99.99th=[42730] 00:12:11.822 write: IOPS=4775, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1008msec); 0 zone resets 00:12:11.822 slat (usec): min=3, max=11798, avg=110.47, stdev=682.40 00:12:11.822 clat (usec): min=749, max=42903, avg=14796.04, stdev=8497.44 00:12:11.822 lat (usec): min=1722, max=42913, avg=14906.51, stdev=8555.37 00:12:11.822 clat percentiles (usec): 00:12:11.822 | 1.00th=[ 3523], 5.00th=[ 4817], 10.00th=[ 6063], 20.00th=[ 7111], 00:12:11.822 | 30.00th=[ 8160], 40.00th=[11207], 50.00th=[13829], 60.00th=[14877], 00:12:11.822 | 70.00th=[16909], 80.00th=[21365], 90.00th=[30802], 95.00th=[32637], 00:12:11.822 | 99.00th=[33817], 99.50th=[34341], 99.90th=[41157], 99.95th=[42730], 00:12:11.822 | 99.99th=[42730] 00:12:11.822 bw ( KiB/s): min=17264, max=20183, per=22.81%, avg=18723.50, stdev=2064.04, samples=2 00:12:11.822 iops : min= 4316, max= 5045, avg=4680.50, stdev=515.48, samples=2 00:12:11.822 lat (usec) : 750=0.01% 00:12:11.822 lat (msec) : 4=1.35%, 10=42.51%, 20=40.21%, 50=15.92% 00:12:11.822 cpu : usr=4.67%, sys=8.14%, ctx=825, majf=0, minf=1 00:12:11.822 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:11.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:11.822 issued rwts: total=4608,4814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.822 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:11.822 job3: (groupid=0, jobs=1): err= 0: pid=1992386: Thu Apr 18 17:28:38 2024 00:12:11.822 read: IOPS=4714, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1005msec) 00:12:11.822 slat (usec): min=3, max=10966, avg=96.99, stdev=510.46 00:12:11.822 clat (usec): min=3532, max=49383, avg=13620.98, stdev=7899.33 00:12:11.822 lat (usec): min=4221, max=49405, avg=13717.96, stdev=7952.76 00:12:11.822 clat percentiles (usec): 00:12:11.822 | 1.00th=[ 5800], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 7963], 00:12:11.822 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[12125], 00:12:11.822 | 70.00th=[15401], 80.00th=[17171], 90.00th=[26870], 95.00th=[32113], 00:12:11.822 | 99.00th=[37487], 99.50th=[38011], 99.90th=[41157], 99.95th=[42730], 00:12:11.822 | 99.99th=[49546] 00:12:11.822 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:12:11.822 slat (usec): min=3, max=10522, avg=96.82, stdev=507.52 00:12:11.822 clat 
(usec): min=2344, max=41441, avg=12306.38, stdev=7236.58 00:12:11.822 lat (usec): min=2351, max=41454, avg=12403.20, stdev=7299.22 00:12:11.822 clat percentiles (usec): 00:12:11.822 | 1.00th=[ 4424], 5.00th=[ 6063], 10.00th=[ 6980], 20.00th=[ 7308], 00:12:11.822 | 30.00th=[ 7767], 40.00th=[ 8356], 50.00th=[11076], 60.00th=[11863], 00:12:11.822 | 70.00th=[12649], 80.00th=[14484], 90.00th=[22414], 95.00th=[31851], 00:12:11.822 | 99.00th=[33817], 99.50th=[38536], 99.90th=[39060], 99.95th=[41157], 00:12:11.822 | 99.99th=[41681] 00:12:11.822 bw ( KiB/s): min=16384, max=24576, per=24.95%, avg=20480.00, stdev=5792.62, samples=2 00:12:11.822 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:12:11.822 lat (msec) : 4=0.40%, 10=48.46%, 20=36.95%, 50=14.19% 00:12:11.822 cpu : usr=4.28%, sys=9.46%, ctx=871, majf=0, minf=1 00:12:11.822 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:11.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:11.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:11.822 issued rwts: total=4738,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:11.822 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:11.822 00:12:11.822 Run status group 0 (all jobs): 00:12:11.822 READ: bw=76.1MiB/s (79.8MB/s), 15.6MiB/s-24.3MiB/s (16.4MB/s-25.5MB/s), io=76.7MiB (80.4MB), run=1005-1008msec 00:12:11.822 WRITE: bw=80.2MiB/s (84.1MB/s), 15.9MiB/s-25.8MiB/s (16.7MB/s-27.0MB/s), io=80.8MiB (84.7MB), run=1005-1008msec 00:12:11.822 00:12:11.822 Disk stats (read/write): 00:12:11.822 nvme0n1: ios=3634/3783, merge=0/0, ticks=20234/19142, in_queue=39376, util=84.87% 00:12:11.822 nvme0n2: ios=5120/5626, merge=0/0, ticks=52350/53181, in_queue=105531, util=86.70% 00:12:11.822 nvme0n3: ios=3961/4096, merge=0/0, ticks=20650/22099, in_queue=42749, util=87.71% 00:12:11.822 nvme0n4: ios=4096/4335, merge=0/0, ticks=17679/19853, in_queue=37532, util=88.67% 00:12:11.822 17:28:38 -- target/fio.sh@55 -- # sync 00:12:11.822 17:28:38 -- target/fio.sh@59 -- # fio_pid=1992526 00:12:11.822 17:28:38 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:11.822 17:28:38 -- target/fio.sh@61 -- # sleep 3 00:12:11.822 [global] 00:12:11.822 thread=1 00:12:11.822 invalidate=1 00:12:11.822 rw=read 00:12:11.822 time_based=1 00:12:11.822 runtime=10 00:12:11.822 ioengine=libaio 00:12:11.822 direct=1 00:12:11.822 bs=4096 00:12:11.822 iodepth=1 00:12:11.822 norandommap=1 00:12:11.822 numjobs=1 00:12:11.822 00:12:11.822 [job0] 00:12:11.822 filename=/dev/nvme0n1 00:12:11.822 [job1] 00:12:11.822 filename=/dev/nvme0n2 00:12:11.822 [job2] 00:12:11.822 filename=/dev/nvme0n3 00:12:11.822 [job3] 00:12:11.822 filename=/dev/nvme0n4 00:12:11.822 Could not set queue depth (nvme0n1) 00:12:11.822 Could not set queue depth (nvme0n2) 00:12:11.822 Could not set queue depth (nvme0n3) 00:12:11.822 Could not set queue depth (nvme0n4) 00:12:12.079 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:12.079 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:12.079 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:12.079 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:12.079 fio-3.35 00:12:12.079 Starting 4 threads 00:12:15.363 
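For reference, the target-side setup that target/fio.sh has driven up to this point, and which the hotplug step below tears back down, can be reproduced with roughly the following commands. This is a minimal sketch assembled from the rpc.py and nvme invocations recorded in this log; the short rpc.py name stands in for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path used above, the --hostnqn/--hostid options from the log are omitted for brevity, and the RDMA address, port, NQN and serial are the ones specific to this run.

  # Create backing malloc bdevs (64 MiB, 512-byte blocks); two are grouped into a
  # raid0 bdev and three into a concat bdev, exactly as in the trace above.
  rpc.py bdev_malloc_create 64 512        # Malloc0 -- repeated for Malloc1..Malloc6
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  # Export everything through one NVMe-oF subsystem over RDMA.
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

  # Host side: connect and wait for the four namespaces to appear as nvme0n1..n4.
  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

The bdev_raid_delete and bdev_malloc_delete calls that follow remove these bdevs while the 10-second read job is still running, which is why each fio job below ends with err=121 (Remote I/O error) — the outcome the script later reports as "nvmf hotplug test: fio failed as expected".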
17:28:41 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:15.363 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=83345408, buflen=4096 00:12:15.363 fio: pid=1992620, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:15.363 17:28:41 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:15.363 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=82948096, buflen=4096 00:12:15.363 fio: pid=1992619, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:15.363 17:28:41 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.363 17:28:41 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:15.621 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=31084544, buflen=4096 00:12:15.621 fio: pid=1992617, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:15.621 17:28:41 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.621 17:28:41 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:15.879 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=63815680, buflen=4096 00:12:15.879 fio: pid=1992618, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:12:15.879 17:28:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.879 17:28:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:15.879 00:12:15.879 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1992617: Thu Apr 18 17:28:42 2024 00:12:15.879 read: IOPS=7063, BW=27.6MiB/s (28.9MB/s)(93.6MiB/3394msec) 00:12:15.879 slat (usec): min=4, max=13326, avg=11.26, stdev=134.95 00:12:15.879 clat (usec): min=55, max=449, avg=128.40, stdev=34.60 00:12:15.879 lat (usec): min=60, max=13423, avg=139.66, stdev=139.66 00:12:15.879 clat percentiles (usec): 00:12:15.879 | 1.00th=[ 68], 5.00th=[ 81], 10.00th=[ 85], 20.00th=[ 92], 00:12:15.879 | 30.00th=[ 99], 40.00th=[ 123], 50.00th=[ 133], 60.00th=[ 141], 00:12:15.879 | 70.00th=[ 147], 80.00th=[ 157], 90.00th=[ 174], 95.00th=[ 186], 00:12:15.879 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 247], 99.95th=[ 277], 00:12:15.879 | 99.99th=[ 326] 00:12:15.879 bw ( KiB/s): min=23840, max=32136, per=26.42%, avg=27813.33, stdev=3298.55, samples=6 00:12:15.879 iops : min= 5960, max= 8034, avg=6953.33, stdev=824.64, samples=6 00:12:15.879 lat (usec) : 100=30.66%, 250=69.25%, 500=0.09% 00:12:15.879 cpu : usr=2.77%, sys=7.40%, ctx=23979, majf=0, minf=1 00:12:15.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.879 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.879 issued rwts: total=23974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.879 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.879 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1992618: Thu Apr 18 17:28:42 2024 00:12:15.879 read: IOPS=8714, BW=34.0MiB/s 
(35.7MB/s)(125MiB/3668msec) 00:12:15.879 slat (usec): min=4, max=15830, avg=10.44, stdev=153.91 00:12:15.879 clat (usec): min=54, max=532, avg=102.67, stdev=34.97 00:12:15.879 lat (usec): min=58, max=16012, avg=113.11, stdev=158.49 00:12:15.879 clat percentiles (usec): 00:12:15.879 | 1.00th=[ 59], 5.00th=[ 63], 10.00th=[ 70], 20.00th=[ 77], 00:12:15.879 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 98], 00:12:15.879 | 70.00th=[ 117], 80.00th=[ 129], 90.00th=[ 153], 95.00th=[ 176], 00:12:15.879 | 99.00th=[ 206], 99.50th=[ 221], 99.90th=[ 285], 99.95th=[ 302], 00:12:15.879 | 99.99th=[ 347] 00:12:15.879 bw ( KiB/s): min=22176, max=42848, per=32.70%, avg=34421.71, stdev=7608.43, samples=7 00:12:15.879 iops : min= 5544, max=10712, avg=8605.43, stdev=1902.11, samples=7 00:12:15.879 lat (usec) : 100=61.89%, 250=37.91%, 500=0.19%, 750=0.01% 00:12:15.879 cpu : usr=2.89%, sys=8.18%, ctx=31974, majf=0, minf=1 00:12:15.879 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.880 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.880 issued rwts: total=31965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.880 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.880 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1992619: Thu Apr 18 17:28:42 2024 00:12:15.880 read: IOPS=6439, BW=25.2MiB/s (26.4MB/s)(79.1MiB/3145msec) 00:12:15.880 slat (usec): min=4, max=11661, avg=11.03, stdev=102.85 00:12:15.880 clat (usec): min=81, max=552, avg=142.29, stdev=30.54 00:12:15.880 lat (usec): min=88, max=11848, avg=153.32, stdev=107.68 00:12:15.880 clat percentiles (usec): 00:12:15.880 | 1.00th=[ 89], 5.00th=[ 95], 10.00th=[ 99], 20.00th=[ 116], 00:12:15.880 | 30.00th=[ 129], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 149], 00:12:15.880 | 70.00th=[ 155], 80.00th=[ 167], 90.00th=[ 182], 95.00th=[ 194], 00:12:15.880 | 99.00th=[ 217], 99.50th=[ 227], 99.90th=[ 281], 99.95th=[ 297], 00:12:15.880 | 99.99th=[ 412] 00:12:15.880 bw ( KiB/s): min=20768, max=32392, per=24.49%, avg=25782.67, stdev=3892.55, samples=6 00:12:15.880 iops : min= 5192, max= 8098, avg=6445.67, stdev=973.14, samples=6 00:12:15.880 lat (usec) : 100=11.39%, 250=88.38%, 500=0.23%, 750=0.01% 00:12:15.880 cpu : usr=3.09%, sys=6.55%, ctx=20255, majf=0, minf=1 00:12:15.880 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.880 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.880 issued rwts: total=20252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.880 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.880 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1992620: Thu Apr 18 17:28:42 2024 00:12:15.880 read: IOPS=7055, BW=27.6MiB/s (28.9MB/s)(79.5MiB/2884msec) 00:12:15.880 slat (nsec): min=4752, max=58854, avg=9992.44, stdev=4115.84 00:12:15.880 clat (usec): min=78, max=384, avg=129.78, stdev=27.76 00:12:15.880 lat (usec): min=85, max=391, avg=139.77, stdev=28.27 00:12:15.880 clat percentiles (usec): 00:12:15.880 | 1.00th=[ 89], 5.00th=[ 99], 10.00th=[ 103], 20.00th=[ 110], 00:12:15.880 | 30.00th=[ 115], 40.00th=[ 120], 50.00th=[ 124], 60.00th=[ 128], 00:12:15.880 | 70.00th=[ 135], 80.00th=[ 145], 90.00th=[ 174], 95.00th=[ 190], 
00:12:15.880 | 99.00th=[ 212], 99.50th=[ 231], 99.90th=[ 285], 99.95th=[ 302], 00:12:15.880 | 99.99th=[ 330] 00:12:15.880 bw ( KiB/s): min=22144, max=30968, per=26.72%, avg=28129.60, stdev=3523.39, samples=5 00:12:15.880 iops : min= 5536, max= 7742, avg=7032.40, stdev=880.85, samples=5 00:12:15.880 lat (usec) : 100=6.28%, 250=93.42%, 500=0.29% 00:12:15.880 cpu : usr=2.67%, sys=7.87%, ctx=20350, majf=0, minf=1 00:12:15.880 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:15.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.880 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:15.880 issued rwts: total=20349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:15.880 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:15.880 00:12:15.880 Run status group 0 (all jobs): 00:12:15.880 READ: bw=103MiB/s (108MB/s), 25.2MiB/s-34.0MiB/s (26.4MB/s-35.7MB/s), io=377MiB (395MB), run=2884-3668msec 00:12:15.880 00:12:15.880 Disk stats (read/write): 00:12:15.880 nvme0n1: ios=23586/0, merge=0/0, ticks=3018/0, in_queue=3018, util=94.99% 00:12:15.880 nvme0n2: ios=31107/0, merge=0/0, ticks=3196/0, in_queue=3196, util=95.05% 00:12:15.880 nvme0n3: ios=20014/0, merge=0/0, ticks=2861/0, in_queue=2861, util=96.20% 00:12:15.880 nvme0n4: ios=20165/0, merge=0/0, ticks=2598/0, in_queue=2598, util=96.75% 00:12:16.138 17:28:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:16.138 17:28:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:16.396 17:28:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:16.396 17:28:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:16.655 17:28:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:16.655 17:28:43 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:16.913 17:28:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:16.913 17:28:43 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:17.172 17:28:43 -- target/fio.sh@69 -- # fio_status=0 00:12:17.172 17:28:43 -- target/fio.sh@70 -- # wait 1992526 00:12:17.172 17:28:43 -- target/fio.sh@70 -- # fio_status=4 00:12:17.172 17:28:43 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.698 17:28:45 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.698 17:28:45 -- common/autotest_common.sh@1205 -- # local i=0 00:12:19.698 17:28:45 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:19.698 17:28:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.698 17:28:45 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:19.698 17:28:45 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.698 17:28:45 -- common/autotest_common.sh@1217 -- # return 0 00:12:19.698 17:28:45 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:19.698 17:28:45 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:19.698 nvmf hotplug test: fio failed as 
expected 00:12:19.698 17:28:45 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.698 17:28:46 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:19.698 17:28:46 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:19.698 17:28:46 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:19.698 17:28:46 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:19.698 17:28:46 -- target/fio.sh@91 -- # nvmftestfini 00:12:19.698 17:28:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:19.698 17:28:46 -- nvmf/common.sh@117 -- # sync 00:12:19.698 17:28:46 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:19.698 17:28:46 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:19.698 17:28:46 -- nvmf/common.sh@120 -- # set +e 00:12:19.698 17:28:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.698 17:28:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:19.698 rmmod nvme_rdma 00:12:19.698 rmmod nvme_fabrics 00:12:19.698 17:28:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.698 17:28:46 -- nvmf/common.sh@124 -- # set -e 00:12:19.698 17:28:46 -- nvmf/common.sh@125 -- # return 0 00:12:19.698 17:28:46 -- nvmf/common.sh@478 -- # '[' -n 1990210 ']' 00:12:19.698 17:28:46 -- nvmf/common.sh@479 -- # killprocess 1990210 00:12:19.698 17:28:46 -- common/autotest_common.sh@936 -- # '[' -z 1990210 ']' 00:12:19.698 17:28:46 -- common/autotest_common.sh@940 -- # kill -0 1990210 00:12:19.698 17:28:46 -- common/autotest_common.sh@941 -- # uname 00:12:19.698 17:28:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:19.698 17:28:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1990210 00:12:19.956 17:28:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:19.956 17:28:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:19.956 17:28:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1990210' 00:12:19.956 killing process with pid 1990210 00:12:19.956 17:28:46 -- common/autotest_common.sh@955 -- # kill 1990210 00:12:19.956 17:28:46 -- common/autotest_common.sh@960 -- # wait 1990210 00:12:20.214 17:28:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:20.214 17:28:46 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:20.214 00:12:20.214 real 0m26.259s 00:12:20.214 user 1m43.157s 00:12:20.214 sys 0m6.911s 00:12:20.214 17:28:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:20.214 17:28:46 -- common/autotest_common.sh@10 -- # set +x 00:12:20.214 ************************************ 00:12:20.214 END TEST nvmf_fio_target 00:12:20.214 ************************************ 00:12:20.214 17:28:46 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:20.214 17:28:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:20.214 17:28:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:20.214 17:28:46 -- common/autotest_common.sh@10 -- # set +x 00:12:20.214 ************************************ 00:12:20.214 START TEST nvmf_bdevio 00:12:20.214 ************************************ 00:12:20.214 17:28:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:12:20.472 * Looking for test storage... 
00:12:20.472 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:20.472 17:28:46 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.472 17:28:46 -- nvmf/common.sh@7 -- # uname -s 00:12:20.472 17:28:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.472 17:28:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.472 17:28:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.472 17:28:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.472 17:28:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.472 17:28:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.472 17:28:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.472 17:28:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.472 17:28:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.472 17:28:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.472 17:28:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.472 17:28:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:20.472 17:28:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.472 17:28:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.472 17:28:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.472 17:28:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.472 17:28:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:20.472 17:28:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.472 17:28:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.472 17:28:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.472 17:28:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.472 17:28:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.472 17:28:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.472 17:28:46 -- paths/export.sh@5 -- # export PATH 00:12:20.472 17:28:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.472 17:28:46 -- nvmf/common.sh@47 -- # : 0 00:12:20.472 17:28:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.472 17:28:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.472 17:28:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.472 17:28:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.472 17:28:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.472 17:28:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.472 17:28:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.472 17:28:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.472 17:28:46 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:20.472 17:28:46 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:20.472 17:28:46 -- target/bdevio.sh@14 -- # nvmftestinit 00:12:20.472 17:28:46 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:20.472 17:28:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.472 17:28:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:20.472 17:28:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:20.472 17:28:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:20.472 17:28:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.472 17:28:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.472 17:28:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.472 17:28:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:20.472 17:28:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:20.472 17:28:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:20.472 17:28:46 -- common/autotest_common.sh@10 -- # set +x 00:12:22.378 17:28:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:22.378 17:28:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:22.378 17:28:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:22.378 17:28:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:22.378 17:28:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:22.378 17:28:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:22.378 17:28:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:22.378 17:28:48 -- nvmf/common.sh@295 -- # net_devs=() 00:12:22.378 17:28:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:22.378 17:28:48 -- nvmf/common.sh@296 
-- # e810=() 00:12:22.378 17:28:48 -- nvmf/common.sh@296 -- # local -ga e810 00:12:22.378 17:28:48 -- nvmf/common.sh@297 -- # x722=() 00:12:22.378 17:28:48 -- nvmf/common.sh@297 -- # local -ga x722 00:12:22.378 17:28:48 -- nvmf/common.sh@298 -- # mlx=() 00:12:22.378 17:28:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:22.378 17:28:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.378 17:28:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.378 17:28:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.378 17:28:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.378 17:28:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.378 17:28:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.378 17:28:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.378 17:28:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.378 17:28:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.378 17:28:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.378 17:28:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.378 17:28:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:22.378 17:28:48 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:22.378 17:28:48 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:22.378 17:28:48 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:22.378 17:28:48 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:22.378 17:28:48 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:22.378 17:28:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:22.378 17:28:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.378 17:28:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:12:22.378 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:12:22.378 17:28:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:22.378 17:28:48 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:22.378 17:28:48 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:22.378 17:28:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:22.378 17:28:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.378 17:28:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:12:22.378 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:12:22.378 17:28:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:22.378 17:28:48 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:22.378 17:28:48 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:22.378 17:28:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:22.378 17:28:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:22.378 17:28:48 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:22.378 17:28:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.378 17:28:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.378 17:28:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:22.378 17:28:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.378 17:28:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:12:22.378 Found net devices under 0000:09:00.0: mlx_0_0 00:12:22.378 17:28:48 -- nvmf/common.sh@390 -- 
# net_devs+=("${pci_net_devs[@]}") 00:12:22.378 17:28:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.378 17:28:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.378 17:28:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:22.378 17:28:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.378 17:28:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:12:22.378 Found net devices under 0000:09:00.1: mlx_0_1 00:12:22.378 17:28:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.378 17:28:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:22.378 17:28:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:22.378 17:28:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:22.378 17:28:48 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:22.378 17:28:48 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:22.378 17:28:48 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:22.378 17:28:48 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:22.378 17:28:48 -- nvmf/common.sh@58 -- # uname 00:12:22.378 17:28:48 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:22.378 17:28:48 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:22.378 17:28:48 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:22.378 17:28:48 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:22.378 17:28:48 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:22.378 17:28:48 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:22.378 17:28:48 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:22.378 17:28:48 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:22.378 17:28:48 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:22.379 17:28:48 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:22.379 17:28:48 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:22.379 17:28:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:22.379 17:28:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:22.379 17:28:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:22.379 17:28:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:22.379 17:28:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:22.379 17:28:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:22.379 17:28:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.379 17:28:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:22.379 17:28:48 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:22.379 17:28:48 -- nvmf/common.sh@105 -- # continue 2 00:12:22.379 17:28:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:22.379 17:28:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.379 17:28:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:22.379 17:28:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.379 17:28:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:22.379 17:28:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:22.379 17:28:48 -- nvmf/common.sh@105 -- # continue 2 00:12:22.379 17:28:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:22.379 17:28:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:22.379 17:28:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:22.379 17:28:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:22.379 17:28:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:12:22.379 17:28:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:22.379 17:28:48 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:22.379 17:28:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:22.379 17:28:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:22.379 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:22.379 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:12:22.379 altname enp9s0f0np0 00:12:22.379 inet 192.168.100.8/24 scope global mlx_0_0 00:12:22.379 valid_lft forever preferred_lft forever 00:12:22.379 17:28:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:22.379 17:28:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:22.379 17:28:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:22.379 17:28:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:22.379 17:28:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:22.379 17:28:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:22.379 17:28:48 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:22.379 17:28:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:22.379 17:28:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:22.379 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:22.379 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:12:22.379 altname enp9s0f1np1 00:12:22.379 inet 192.168.100.9/24 scope global mlx_0_1 00:12:22.379 valid_lft forever preferred_lft forever 00:12:22.379 17:28:48 -- nvmf/common.sh@411 -- # return 0 00:12:22.379 17:28:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:22.379 17:28:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:22.379 17:28:48 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:22.379 17:28:48 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:22.379 17:28:48 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:22.379 17:28:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:22.379 17:28:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:22.379 17:28:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:22.379 17:28:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:22.379 17:28:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:22.379 17:28:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:22.379 17:28:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.379 17:28:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:22.379 17:28:48 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:22.379 17:28:48 -- nvmf/common.sh@105 -- # continue 2 00:12:22.379 17:28:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:22.379 17:28:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.379 17:28:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:22.379 17:28:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.379 17:28:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:22.379 17:28:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:22.379 17:28:48 -- nvmf/common.sh@105 -- # continue 2 00:12:22.379 17:28:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:22.379 17:28:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:22.379 17:28:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:22.379 17:28:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:22.379 17:28:48 -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:12:22.379 17:28:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:22.379 17:28:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:22.379 17:28:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:22.379 17:28:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:22.379 17:28:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:22.379 17:28:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:22.379 17:28:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:22.379 17:28:48 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:22.379 192.168.100.9' 00:12:22.379 17:28:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:22.379 192.168.100.9' 00:12:22.379 17:28:48 -- nvmf/common.sh@446 -- # head -n 1 00:12:22.379 17:28:48 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:22.379 17:28:48 -- nvmf/common.sh@447 -- # tail -n +2 00:12:22.379 17:28:48 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:22.379 192.168.100.9' 00:12:22.379 17:28:48 -- nvmf/common.sh@447 -- # head -n 1 00:12:22.379 17:28:48 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:22.379 17:28:48 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:22.379 17:28:48 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:22.379 17:28:48 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:22.379 17:28:48 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:22.379 17:28:48 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:22.379 17:28:48 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:22.379 17:28:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:22.379 17:28:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:22.379 17:28:48 -- common/autotest_common.sh@10 -- # set +x 00:12:22.379 17:28:48 -- nvmf/common.sh@470 -- # nvmfpid=1995314 00:12:22.379 17:28:48 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:22.379 17:28:48 -- nvmf/common.sh@471 -- # waitforlisten 1995314 00:12:22.379 17:28:48 -- common/autotest_common.sh@817 -- # '[' -z 1995314 ']' 00:12:22.379 17:28:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.379 17:28:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:22.379 17:28:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.379 17:28:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:22.379 17:28:48 -- common/autotest_common.sh@10 -- # set +x 00:12:22.379 [2024-04-18 17:28:48.805808] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:12:22.380 [2024-04-18 17:28:48.805891] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.380 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.380 [2024-04-18 17:28:48.869216] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.639 [2024-04-18 17:28:48.984756] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:22.639 [2024-04-18 17:28:48.984808] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.639 [2024-04-18 17:28:48.984824] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.639 [2024-04-18 17:28:48.984838] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.639 [2024-04-18 17:28:48.984849] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.639 [2024-04-18 17:28:48.984945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:22.639 [2024-04-18 17:28:48.984999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:22.639 [2024-04-18 17:28:48.985254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:22.639 [2024-04-18 17:28:48.985259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.571 17:28:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:23.571 17:28:49 -- common/autotest_common.sh@850 -- # return 0 00:12:23.571 17:28:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:23.571 17:28:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:23.571 17:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:23.571 17:28:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.571 17:28:49 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:23.571 17:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.571 17:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:23.571 [2024-04-18 17:28:49.801164] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf7f420/0xf83910) succeed. 00:12:23.571 [2024-04-18 17:28:49.812043] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf80a10/0xfc4fa0) succeed. 
00:12:23.571 17:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.571 17:28:49 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:23.571 17:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.571 17:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:23.571 Malloc0 00:12:23.571 17:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.571 17:28:49 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:23.571 17:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.571 17:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:23.571 17:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.571 17:28:49 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:23.571 17:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.571 17:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:23.571 17:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.571 17:28:49 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:23.571 17:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:23.571 17:28:49 -- common/autotest_common.sh@10 -- # set +x 00:12:23.571 [2024-04-18 17:28:50.003140] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:23.571 17:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:23.571 17:28:50 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:23.571 17:28:50 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:23.571 17:28:50 -- nvmf/common.sh@521 -- # config=() 00:12:23.571 17:28:50 -- nvmf/common.sh@521 -- # local subsystem config 00:12:23.571 17:28:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:23.571 17:28:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:23.571 { 00:12:23.571 "params": { 00:12:23.571 "name": "Nvme$subsystem", 00:12:23.571 "trtype": "$TEST_TRANSPORT", 00:12:23.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:23.571 "adrfam": "ipv4", 00:12:23.571 "trsvcid": "$NVMF_PORT", 00:12:23.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:23.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:23.571 "hdgst": ${hdgst:-false}, 00:12:23.571 "ddgst": ${ddgst:-false} 00:12:23.571 }, 00:12:23.571 "method": "bdev_nvme_attach_controller" 00:12:23.571 } 00:12:23.571 EOF 00:12:23.571 )") 00:12:23.571 17:28:50 -- nvmf/common.sh@543 -- # cat 00:12:23.571 17:28:50 -- nvmf/common.sh@545 -- # jq . 00:12:23.571 17:28:50 -- nvmf/common.sh@546 -- # IFS=, 00:12:23.571 17:28:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:23.571 "params": { 00:12:23.571 "name": "Nvme1", 00:12:23.571 "trtype": "rdma", 00:12:23.571 "traddr": "192.168.100.8", 00:12:23.571 "adrfam": "ipv4", 00:12:23.571 "trsvcid": "4420", 00:12:23.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:23.571 "hdgst": false, 00:12:23.571 "ddgst": false 00:12:23.571 }, 00:12:23.571 "method": "bdev_nvme_attach_controller" 00:12:23.571 }' 00:12:23.571 [2024-04-18 17:28:50.047169] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:12:23.571 [2024-04-18 17:28:50.047259] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1995531 ] 00:12:23.571 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.829 [2024-04-18 17:28:50.115908] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:23.829 [2024-04-18 17:28:50.230407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.829 [2024-04-18 17:28:50.230461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.829 [2024-04-18 17:28:50.230464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.087 I/O targets: 00:12:24.087 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:24.087 00:12:24.087 00:12:24.087 CUnit - A unit testing framework for C - Version 2.1-3 00:12:24.087 http://cunit.sourceforge.net/ 00:12:24.087 00:12:24.087 00:12:24.087 Suite: bdevio tests on: Nvme1n1 00:12:24.087 Test: blockdev write read block ...passed 00:12:24.087 Test: blockdev write zeroes read block ...passed 00:12:24.087 Test: blockdev write zeroes read no split ...passed 00:12:24.087 Test: blockdev write zeroes read split ...passed 00:12:24.087 Test: blockdev write zeroes read split partial ...passed 00:12:24.087 Test: blockdev reset ...[2024-04-18 17:28:50.462090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:24.087 [2024-04-18 17:28:50.486968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:12:24.087 [2024-04-18 17:28:50.512713] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:24.087 passed 00:12:24.087 Test: blockdev write read 8 blocks ...passed 00:12:24.087 Test: blockdev write read size > 128k ...passed 00:12:24.087 Test: blockdev write read invalid size ...passed 00:12:24.087 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.087 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.088 Test: blockdev write read max offset ...passed 00:12:24.088 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.088 Test: blockdev writev readv 8 blocks ...passed 00:12:24.088 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.088 Test: blockdev writev readv block ...passed 00:12:24.088 Test: blockdev writev readv size > 128k ...passed 00:12:24.088 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.088 Test: blockdev comparev and writev ...[2024-04-18 17:28:50.516490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.088 [2024-04-18 17:28:50.516527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:24.088 [2024-04-18 17:28:50.516546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.088 [2024-04-18 17:28:50.516562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:24.088 [2024-04-18 17:28:50.516760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.088 [2024-04-18 17:28:50.516783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:24.088 [2024-04-18 17:28:50.516800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.088 [2024-04-18 17:28:50.516814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:24.088 [2024-04-18 17:28:50.517007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.088 [2024-04-18 17:28:50.517030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:24.088 [2024-04-18 17:28:50.517046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.088 [2024-04-18 17:28:50.517060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:24.088 [2024-04-18 17:28:50.517241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.088 [2024-04-18 17:28:50.517264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:24.088 [2024-04-18 17:28:50.517280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.088 [2024-04-18 17:28:50.517294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:24.088 passed 00:12:24.088 Test: blockdev nvme passthru rw ...passed 00:12:24.088 Test: blockdev nvme passthru vendor specific ...[2024-04-18 17:28:50.517675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:24.088 [2024-04-18 17:28:50.517700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:24.088 [2024-04-18 17:28:50.517757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:24.088 [2024-04-18 17:28:50.517777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:24.088 [2024-04-18 17:28:50.517834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:24.088 [2024-04-18 17:28:50.517854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:24.088 [2024-04-18 17:28:50.517910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:12:24.088 [2024-04-18 17:28:50.517929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:24.088 passed 00:12:24.088 Test: blockdev nvme admin passthru ...passed 00:12:24.088 Test: blockdev copy ...passed 00:12:24.088 00:12:24.088 Run Summary: Type Total Ran Passed Failed Inactive 00:12:24.088 suites 1 1 n/a 0 0 00:12:24.088 tests 23 23 23 0 0 00:12:24.088 asserts 152 152 152 0 n/a 00:12:24.088 00:12:24.088 Elapsed time = 0.178 seconds 00:12:24.346 17:28:50 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.346 17:28:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.346 17:28:50 -- common/autotest_common.sh@10 -- # set +x 00:12:24.346 17:28:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.346 17:28:50 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:24.346 17:28:50 -- target/bdevio.sh@30 -- # nvmftestfini 00:12:24.346 17:28:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:24.346 17:28:50 -- nvmf/common.sh@117 -- # sync 00:12:24.346 17:28:50 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:24.346 17:28:50 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:24.346 17:28:50 -- nvmf/common.sh@120 -- # set +e 00:12:24.346 17:28:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.346 17:28:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:24.346 rmmod nvme_rdma 00:12:24.346 rmmod nvme_fabrics 00:12:24.346 17:28:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.346 17:28:50 -- nvmf/common.sh@124 -- # set -e 00:12:24.346 17:28:50 -- nvmf/common.sh@125 -- # return 0 00:12:24.346 17:28:50 -- nvmf/common.sh@478 -- # '[' -n 1995314 ']' 00:12:24.346 17:28:50 -- nvmf/common.sh@479 -- # killprocess 1995314 00:12:24.346 17:28:50 -- common/autotest_common.sh@936 -- # '[' -z 1995314 ']' 00:12:24.346 17:28:50 -- common/autotest_common.sh@940 -- # kill -0 1995314 00:12:24.346 17:28:50 -- common/autotest_common.sh@941 -- # uname 00:12:24.346 17:28:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:24.346 17:28:50 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1995314 00:12:24.346 17:28:50 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:12:24.346 17:28:50 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:12:24.346 17:28:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1995314' 00:12:24.346 killing process with pid 1995314 00:12:24.346 17:28:50 -- common/autotest_common.sh@955 -- # kill 1995314 00:12:24.346 17:28:50 -- common/autotest_common.sh@960 -- # wait 1995314 00:12:24.914 17:28:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:24.914 17:28:51 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:24.914 00:12:24.914 real 0m4.509s 00:12:24.914 user 0m10.880s 00:12:24.914 sys 0m1.981s 00:12:24.914 17:28:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:24.914 17:28:51 -- common/autotest_common.sh@10 -- # set +x 00:12:24.915 ************************************ 00:12:24.915 END TEST nvmf_bdevio 00:12:24.915 ************************************ 00:12:24.915 17:28:51 -- nvmf/nvmf.sh@58 -- # '[' rdma = tcp ']' 00:12:24.915 17:28:51 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:12:24.915 17:28:51 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:12:24.915 17:28:51 -- nvmf/nvmf.sh@71 -- # '[' rdma = tcp ']' 00:12:24.915 17:28:51 -- nvmf/nvmf.sh@77 -- # [[ rdma == \r\d\m\a ]] 00:12:24.915 17:28:51 -- nvmf/nvmf.sh@78 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:12:24.915 17:28:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:24.915 17:28:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:24.915 17:28:51 -- common/autotest_common.sh@10 -- # set +x 00:12:24.915 ************************************ 00:12:24.915 START TEST nvmf_device_removal 00:12:24.915 ************************************ 00:12:24.915 17:28:51 -- common/autotest_common.sh@1111 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:12:24.915 * Looking for test storage... 
00:12:24.915 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:24.915 17:28:51 -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:12:24.915 17:28:51 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:24.915 17:28:51 -- common/autotest_common.sh@34 -- # set -e 00:12:24.915 17:28:51 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:24.915 17:28:51 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:24.915 17:28:51 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:12:24.915 17:28:51 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:24.915 17:28:51 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:12:24.915 17:28:51 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:24.915 17:28:51 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:24.915 17:28:51 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:24.915 17:28:51 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:24.915 17:28:51 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:24.915 17:28:51 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:24.915 17:28:51 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:24.915 17:28:51 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:24.915 17:28:51 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:24.915 17:28:51 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:24.915 17:28:51 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:24.915 17:28:51 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:24.915 17:28:51 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:24.915 17:28:51 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:24.915 17:28:51 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:24.915 17:28:51 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:24.915 17:28:51 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:24.915 17:28:51 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:24.915 17:28:51 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:24.915 17:28:51 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:24.915 17:28:51 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:24.915 17:28:51 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:24.915 17:28:51 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:24.915 17:28:51 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:24.915 17:28:51 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:24.915 17:28:51 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:24.915 17:28:51 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:24.915 17:28:51 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:24.915 17:28:51 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:24.915 17:28:51 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:24.915 17:28:51 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:24.915 17:28:51 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:24.915 17:28:51 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:24.915 17:28:51 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 
00:12:24.915 17:28:51 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:24.915 17:28:51 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:24.915 17:28:51 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:24.915 17:28:51 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:24.915 17:28:51 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:24.915 17:28:51 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:24.915 17:28:51 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:24.915 17:28:51 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:24.915 17:28:51 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:24.915 17:28:51 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:24.915 17:28:51 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:24.915 17:28:51 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:12:24.915 17:28:51 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:12:24.915 17:28:51 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:24.915 17:28:51 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:12:24.915 17:28:51 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:12:24.915 17:28:51 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:12:24.915 17:28:51 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:12:24.915 17:28:51 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:12:24.915 17:28:51 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:12:24.915 17:28:51 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:12:24.915 17:28:51 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:12:24.915 17:28:51 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:12:24.915 17:28:51 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:12:24.915 17:28:51 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:12:24.915 17:28:51 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:12:24.915 17:28:51 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:12:24.915 17:28:51 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:12:24.915 17:28:51 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:12:24.915 17:28:51 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:12:24.915 17:28:51 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:12:24.915 17:28:51 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:12:24.915 17:28:51 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:12:24.915 17:28:51 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:24.915 17:28:51 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:12:24.915 17:28:51 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:12:24.915 17:28:51 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:12:24.915 17:28:51 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:12:24.915 17:28:51 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:12:24.915 17:28:51 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:12:24.915 17:28:51 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:12:24.915 17:28:51 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:12:24.915 17:28:51 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:12:24.915 17:28:51 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:12:24.915 17:28:51 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:12:24.915 17:28:51 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:24.915 17:28:51 -- common/build_config.sh@81 -- # 
CONFIG_CROSS_PREFIX= 00:12:24.915 17:28:51 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:12:24.915 17:28:51 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:24.915 17:28:51 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:24.915 17:28:51 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:24.915 17:28:51 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:24.915 17:28:51 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:24.915 17:28:51 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:24.915 17:28:51 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:12:24.915 17:28:51 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:24.915 17:28:51 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:24.915 17:28:51 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:24.915 17:28:51 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:24.915 17:28:51 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:24.915 17:28:51 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:24.915 17:28:51 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:24.915 17:28:51 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:12:24.915 17:28:51 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:24.915 #define SPDK_CONFIG_H 00:12:24.915 #define SPDK_CONFIG_APPS 1 00:12:24.915 #define SPDK_CONFIG_ARCH native 00:12:24.915 #undef SPDK_CONFIG_ASAN 00:12:24.915 #undef SPDK_CONFIG_AVAHI 00:12:24.915 #undef SPDK_CONFIG_CET 00:12:24.915 #define SPDK_CONFIG_COVERAGE 1 00:12:24.915 #define SPDK_CONFIG_CROSS_PREFIX 00:12:24.915 #undef SPDK_CONFIG_CRYPTO 00:12:24.915 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:24.915 #undef SPDK_CONFIG_CUSTOMOCF 00:12:24.915 #undef SPDK_CONFIG_DAOS 00:12:24.915 #define SPDK_CONFIG_DAOS_DIR 00:12:24.915 #define SPDK_CONFIG_DEBUG 1 00:12:24.915 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:24.915 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:24.915 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:24.916 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:24.916 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:24.916 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:24.916 #define SPDK_CONFIG_EXAMPLES 1 00:12:24.916 #undef SPDK_CONFIG_FC 00:12:24.916 #define SPDK_CONFIG_FC_PATH 00:12:24.916 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:24.916 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:24.916 #undef SPDK_CONFIG_FUSE 00:12:24.916 #undef SPDK_CONFIG_FUZZER 00:12:24.916 #define SPDK_CONFIG_FUZZER_LIB 00:12:24.916 #undef SPDK_CONFIG_GOLANG 00:12:24.916 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:24.916 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:24.916 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:24.916 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:12:24.916 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:24.916 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:24.916 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 
00:12:24.916 #define SPDK_CONFIG_IDXD 1 00:12:24.916 #undef SPDK_CONFIG_IDXD_KERNEL 00:12:24.916 #undef SPDK_CONFIG_IPSEC_MB 00:12:24.916 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:24.916 #define SPDK_CONFIG_ISAL 1 00:12:24.916 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:24.916 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:24.916 #define SPDK_CONFIG_LIBDIR 00:12:24.916 #undef SPDK_CONFIG_LTO 00:12:24.916 #define SPDK_CONFIG_MAX_LCORES 00:12:24.916 #define SPDK_CONFIG_NVME_CUSE 1 00:12:24.916 #undef SPDK_CONFIG_OCF 00:12:24.916 #define SPDK_CONFIG_OCF_PATH 00:12:24.916 #define SPDK_CONFIG_OPENSSL_PATH 00:12:24.916 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:24.916 #define SPDK_CONFIG_PGO_DIR 00:12:24.916 #undef SPDK_CONFIG_PGO_USE 00:12:24.916 #define SPDK_CONFIG_PREFIX /usr/local 00:12:24.916 #undef SPDK_CONFIG_RAID5F 00:12:24.916 #undef SPDK_CONFIG_RBD 00:12:24.916 #define SPDK_CONFIG_RDMA 1 00:12:24.916 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:24.916 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:24.916 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:24.916 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:24.916 #define SPDK_CONFIG_SHARED 1 00:12:24.916 #undef SPDK_CONFIG_SMA 00:12:24.916 #define SPDK_CONFIG_TESTS 1 00:12:24.916 #undef SPDK_CONFIG_TSAN 00:12:24.916 #define SPDK_CONFIG_UBLK 1 00:12:24.916 #define SPDK_CONFIG_UBSAN 1 00:12:24.916 #undef SPDK_CONFIG_UNIT_TESTS 00:12:24.916 #undef SPDK_CONFIG_URING 00:12:24.916 #define SPDK_CONFIG_URING_PATH 00:12:24.916 #undef SPDK_CONFIG_URING_ZNS 00:12:24.916 #undef SPDK_CONFIG_USDT 00:12:24.916 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:24.916 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:24.916 #undef SPDK_CONFIG_VFIO_USER 00:12:24.916 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:24.916 #define SPDK_CONFIG_VHOST 1 00:12:24.916 #define SPDK_CONFIG_VIRTIO 1 00:12:24.916 #undef SPDK_CONFIG_VTUNE 00:12:24.916 #define SPDK_CONFIG_VTUNE_DIR 00:12:24.916 #define SPDK_CONFIG_WERROR 1 00:12:24.916 #define SPDK_CONFIG_WPDK_DIR 00:12:24.916 #undef SPDK_CONFIG_XNVME 00:12:24.916 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:24.916 17:28:51 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:24.916 17:28:51 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:24.916 17:28:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.916 17:28:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.916 17:28:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.916 17:28:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.916 17:28:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.916 17:28:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.916 17:28:51 -- paths/export.sh@5 -- # export PATH 00:12:24.916 17:28:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.916 17:28:51 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:24.916 17:28:51 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:24.916 17:28:51 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:24.916 17:28:51 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:24.916 17:28:51 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:24.916 17:28:51 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:24.916 17:28:51 -- pm/common@67 -- # TEST_TAG=N/A 00:12:24.916 17:28:51 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:12:24.916 17:28:51 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:12:24.916 17:28:51 -- pm/common@71 -- # uname -s 00:12:24.916 17:28:51 -- pm/common@71 -- # PM_OS=Linux 00:12:24.916 17:28:51 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:24.916 17:28:51 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:12:24.916 17:28:51 -- pm/common@76 -- # [[ Linux == Linux ]] 00:12:24.916 17:28:51 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:12:24.916 17:28:51 -- pm/common@76 -- # [[ ! 
-e /.dockerenv ]] 00:12:24.916 17:28:51 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:24.916 17:28:51 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:24.916 17:28:51 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:12:24.916 17:28:51 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:12:24.916 17:28:51 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:12:24.916 17:28:51 -- common/autotest_common.sh@57 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:12:24.916 17:28:51 -- common/autotest_common.sh@61 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:24.916 17:28:51 -- common/autotest_common.sh@63 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:12:24.916 17:28:51 -- common/autotest_common.sh@65 -- # : 1 00:12:24.916 17:28:51 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:24.916 17:28:51 -- common/autotest_common.sh@67 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:12:24.916 17:28:51 -- common/autotest_common.sh@69 -- # : 00:12:24.916 17:28:51 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:12:24.916 17:28:51 -- common/autotest_common.sh@71 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:12:24.916 17:28:51 -- common/autotest_common.sh@73 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:12:24.916 17:28:51 -- common/autotest_common.sh@75 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:12:24.916 17:28:51 -- common/autotest_common.sh@77 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:24.916 17:28:51 -- common/autotest_common.sh@79 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:12:24.916 17:28:51 -- common/autotest_common.sh@81 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:12:24.916 17:28:51 -- common/autotest_common.sh@83 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:12:24.916 17:28:51 -- common/autotest_common.sh@85 -- # : 1 00:12:24.916 17:28:51 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:12:24.916 17:28:51 -- common/autotest_common.sh@87 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:12:24.916 17:28:51 -- common/autotest_common.sh@89 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:12:24.916 17:28:51 -- common/autotest_common.sh@91 -- # : 1 00:12:24.916 17:28:51 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:12:24.916 17:28:51 -- common/autotest_common.sh@93 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:12:24.916 17:28:51 -- common/autotest_common.sh@95 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:24.916 17:28:51 -- common/autotest_common.sh@97 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:12:24.916 17:28:51 -- common/autotest_common.sh@99 -- # : 0 00:12:24.916 17:28:51 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 
00:12:24.916 17:28:51 -- common/autotest_common.sh@101 -- # : rdma 00:12:24.917 17:28:51 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:24.917 17:28:51 -- common/autotest_common.sh@103 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:12:24.917 17:28:51 -- common/autotest_common.sh@105 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:12:24.917 17:28:51 -- common/autotest_common.sh@107 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:12:24.917 17:28:51 -- common/autotest_common.sh@109 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:12:24.917 17:28:51 -- common/autotest_common.sh@111 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:12:24.917 17:28:51 -- common/autotest_common.sh@113 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:12:24.917 17:28:51 -- common/autotest_common.sh@115 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:12:24.917 17:28:51 -- common/autotest_common.sh@117 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:24.917 17:28:51 -- common/autotest_common.sh@119 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:12:24.917 17:28:51 -- common/autotest_common.sh@121 -- # : 1 00:12:24.917 17:28:51 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:12:24.917 17:28:51 -- common/autotest_common.sh@123 -- # : 00:12:24.917 17:28:51 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:24.917 17:28:51 -- common/autotest_common.sh@125 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:12:24.917 17:28:51 -- common/autotest_common.sh@127 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:12:24.917 17:28:51 -- common/autotest_common.sh@129 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:12:24.917 17:28:51 -- common/autotest_common.sh@131 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:12:24.917 17:28:51 -- common/autotest_common.sh@133 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:12:24.917 17:28:51 -- common/autotest_common.sh@135 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:12:24.917 17:28:51 -- common/autotest_common.sh@137 -- # : 00:12:24.917 17:28:51 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:12:24.917 17:28:51 -- common/autotest_common.sh@139 -- # : true 00:12:24.917 17:28:51 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:12:24.917 17:28:51 -- common/autotest_common.sh@141 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:12:24.917 17:28:51 -- common/autotest_common.sh@143 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:12:24.917 17:28:51 -- common/autotest_common.sh@145 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:12:24.917 17:28:51 -- common/autotest_common.sh@147 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@148 -- # export 
SPDK_TEST_USE_IGB_UIO 00:12:24.917 17:28:51 -- common/autotest_common.sh@149 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:12:24.917 17:28:51 -- common/autotest_common.sh@151 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:12:24.917 17:28:51 -- common/autotest_common.sh@153 -- # : mlx5 00:12:24.917 17:28:51 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:12:24.917 17:28:51 -- common/autotest_common.sh@155 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:12:24.917 17:28:51 -- common/autotest_common.sh@157 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:12:24.917 17:28:51 -- common/autotest_common.sh@159 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:12:24.917 17:28:51 -- common/autotest_common.sh@161 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:12:24.917 17:28:51 -- common/autotest_common.sh@163 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:12:24.917 17:28:51 -- common/autotest_common.sh@166 -- # : 00:12:24.917 17:28:51 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:12:24.917 17:28:51 -- common/autotest_common.sh@168 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:12:24.917 17:28:51 -- common/autotest_common.sh@170 -- # : 0 00:12:24.917 17:28:51 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:24.917 17:28:51 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:24.917 17:28:51 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:24.917 17:28:51 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:24.917 17:28:51 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:24.917 17:28:51 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:24.917 17:28:51 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:24.917 17:28:51 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:24.917 17:28:51 -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:24.917 17:28:51 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:24.917 17:28:51 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:24.917 17:28:51 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:24.917 17:28:51 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:24.917 17:28:51 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:24.917 17:28:51 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:12:24.917 17:28:51 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:24.917 17:28:51 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:24.917 17:28:51 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:24.917 17:28:51 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:24.917 17:28:51 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:24.917 17:28:51 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:12:24.917 17:28:51 -- common/autotest_common.sh@199 -- # cat 00:12:24.917 17:28:51 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:12:24.917 17:28:51 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:24.917 17:28:51 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:24.917 17:28:51 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:24.917 17:28:51 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 
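The exports just traced configure the sanitizer runtimes for the rest of the run. A condensed sketch of the same setup, with the option strings taken verbatim from the trace and the file handling simplified (the cat of any extra per-test suppression list is omitted here):

  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"
  touch "$asan_suppression_file"
  echo "leak:libfuse3.so" >> "$asan_suppression_file"   # known benign leak, hidden from LeakSanitizer
  export LSAN_OPTIONS="suppressions=$asan_suppression_file"
  export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
  export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"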
00:12:24.917 17:28:51 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:12:24.917 17:28:51 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:12:24.917 17:28:51 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:24.917 17:28:51 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:24.917 17:28:51 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:24.917 17:28:51 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:24.917 17:28:51 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:24.917 17:28:51 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:24.917 17:28:51 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:24.918 17:28:51 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:24.918 17:28:51 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:24.918 17:28:51 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:24.918 17:28:51 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:24.918 17:28:51 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:24.918 17:28:51 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:12:24.918 17:28:51 -- common/autotest_common.sh@252 -- # export valgrind= 00:12:24.918 17:28:51 -- common/autotest_common.sh@252 -- # valgrind= 00:12:24.918 17:28:51 -- common/autotest_common.sh@258 -- # uname -s 00:12:24.918 17:28:51 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:12:24.918 17:28:51 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:12:24.918 17:28:51 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:12:24.918 17:28:51 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:12:24.918 17:28:51 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:12:24.918 17:28:51 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:12:24.918 17:28:51 -- common/autotest_common.sh@268 -- # MAKE=make 00:12:24.918 17:28:51 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:12:24.918 17:28:51 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:12:24.918 17:28:51 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:12:24.918 17:28:51 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:12:24.918 17:28:51 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:12:24.918 17:28:51 -- common/autotest_common.sh@289 -- # for i in "$@" 00:12:24.918 17:28:51 -- common/autotest_common.sh@290 -- # case "$i" in 00:12:24.918 17:28:51 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=rdma 00:12:24.918 17:28:51 -- common/autotest_common.sh@307 -- # [[ -z 1995716 ]] 00:12:24.918 17:28:51 -- common/autotest_common.sh@307 -- # kill -0 1995716 00:12:24.918 17:28:51 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:12:24.918 17:28:51 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:12:24.918 17:28:51 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:12:24.918 
17:28:51 -- common/autotest_common.sh@320 -- # local mount target_dir 00:12:24.918 17:28:51 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:12:24.918 17:28:51 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:12:24.918 17:28:51 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:12:24.918 17:28:51 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:12:24.918 17:28:51 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.iBMKzm 00:12:24.918 17:28:51 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:24.918 17:28:51 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:12:24.918 17:28:51 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:12:24.918 17:28:51 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.iBMKzm/tests/target /tmp/spdk.iBMKzm 00:12:24.918 17:28:51 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:12:24.918 17:28:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:12:24.918 17:28:51 -- common/autotest_common.sh@316 -- # df -T 00:12:24.918 17:28:51 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:12:24.918 17:28:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:12:24.918 17:28:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:12:24.918 17:28:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:12:24.918 17:28:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:12:24.918 17:28:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:12:25.177 17:28:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:12:25.177 17:28:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:12:25.177 17:28:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:12:25.177 17:28:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=996749312 00:12:25.177 17:28:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:12:25.177 17:28:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=4287680512 00:12:25.177 17:28:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:12:25.177 17:28:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:12:25.177 17:28:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:12:25.177 17:28:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=50014056448 00:12:25.177 17:28:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994717184 00:12:25.177 17:28:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=11980660736 00:12:25.177 17:28:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:12:25.177 17:28:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:12:25.177 17:28:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:12:25.177 17:28:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=30938091520 00:12:25.177 17:28:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997356544 00:12:25.177 17:28:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=59265024 00:12:25.177 17:28:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:12:25.177 17:28:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 
00:12:25.177 17:28:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:12:25.177 17:28:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=12376522752 00:12:25.177 17:28:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398944256 00:12:25.177 17:28:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=22421504 00:12:25.177 17:28:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:12:25.177 17:28:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:12:25.177 17:28:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:12:25.177 17:28:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996914176 00:12:25.177 17:28:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997360640 00:12:25.177 17:28:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=446464 00:12:25.177 17:28:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:12:25.177 17:28:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:12:25.177 17:28:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:12:25.177 17:28:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199463936 00:12:25.177 17:28:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199468032 00:12:25.177 17:28:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:12:25.177 17:28:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:12:25.177 17:28:51 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:12:25.177 * Looking for test storage... 00:12:25.177 17:28:51 -- common/autotest_common.sh@357 -- # local target_space new_size 00:12:25.177 17:28:51 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:12:25.177 17:28:51 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:25.177 17:28:51 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:25.177 17:28:51 -- common/autotest_common.sh@361 -- # mount=/ 00:12:25.177 17:28:51 -- common/autotest_common.sh@363 -- # target_space=50014056448 00:12:25.177 17:28:51 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:12:25.177 17:28:51 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:12:25.177 17:28:51 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:12:25.177 17:28:51 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:12:25.177 17:28:51 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:12:25.177 17:28:51 -- common/autotest_common.sh@370 -- # new_size=14195253248 00:12:25.177 17:28:51 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:25.177 17:28:51 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:25.177 17:28:51 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:25.177 17:28:51 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:25.177 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:25.177 17:28:51 -- common/autotest_common.sh@378 -- # return 0 00:12:25.177 17:28:51 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:12:25.177 17:28:51 -- 
common/autotest_common.sh@1669 -- # shopt -s extdebug 00:12:25.177 17:28:51 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:25.177 17:28:51 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:25.177 17:28:51 -- common/autotest_common.sh@1673 -- # true 00:12:25.177 17:28:51 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:12:25.177 17:28:51 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:12:25.177 17:28:51 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:12:25.177 17:28:51 -- common/autotest_common.sh@27 -- # exec 00:12:25.177 17:28:51 -- common/autotest_common.sh@29 -- # exec 00:12:25.177 17:28:51 -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:25.177 17:28:51 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:25.177 17:28:51 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:25.177 17:28:51 -- common/autotest_common.sh@18 -- # set -x 00:12:25.177 17:28:51 -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.177 17:28:51 -- nvmf/common.sh@7 -- # uname -s 00:12:25.177 17:28:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.177 17:28:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.177 17:28:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.177 17:28:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.177 17:28:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.177 17:28:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.177 17:28:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.177 17:28:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.177 17:28:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.177 17:28:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.177 17:28:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:25.177 17:28:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:25.177 17:28:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.177 17:28:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.177 17:28:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.177 17:28:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.177 17:28:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:25.177 17:28:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.177 17:28:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.177 17:28:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.177 17:28:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.177 17:28:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.177 17:28:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.177 17:28:51 -- paths/export.sh@5 -- # export PATH 00:12:25.178 17:28:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.178 17:28:51 -- nvmf/common.sh@47 -- # : 0 00:12:25.178 17:28:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.178 17:28:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.178 17:28:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.178 17:28:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.178 17:28:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.178 17:28:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.178 17:28:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.178 17:28:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.178 17:28:51 -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:12:25.178 17:28:51 -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:12:25.178 17:28:51 -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:25.178 17:28:51 -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:12:25.178 17:28:51 -- target/device_removal.sh@18 -- # nvmftestinit 00:12:25.178 17:28:51 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:25.178 17:28:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.178 17:28:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:25.178 17:28:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:25.178 17:28:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:25.178 17:28:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.178 17:28:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.178 17:28:51 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.178 17:28:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:25.178 17:28:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:25.178 17:28:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:25.178 17:28:51 -- common/autotest_common.sh@10 -- # set +x 00:12:27.080 17:28:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:27.080 17:28:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.080 17:28:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:27.080 17:28:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.080 17:28:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.080 17:28:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.080 17:28:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.080 17:28:53 -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.080 17:28:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.080 17:28:53 -- nvmf/common.sh@296 -- # e810=() 00:12:27.080 17:28:53 -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.080 17:28:53 -- nvmf/common.sh@297 -- # x722=() 00:12:27.080 17:28:53 -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.080 17:28:53 -- nvmf/common.sh@298 -- # mlx=() 00:12:27.080 17:28:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.080 17:28:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.080 17:28:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.080 17:28:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.080 17:28:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.080 17:28:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.080 17:28:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.080 17:28:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.080 17:28:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.080 17:28:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.080 17:28:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.080 17:28:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.080 17:28:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.080 17:28:53 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:27.080 17:28:53 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:27.080 17:28:53 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:27.080 17:28:53 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:27.080 17:28:53 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:27.080 17:28:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.080 17:28:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.080 17:28:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:12:27.080 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:12:27.080 17:28:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:27.080 17:28:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:27.080 17:28:53 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:27.080 17:28:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:27.080 17:28:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.080 17:28:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:12:27.080 Found 0000:09:00.1 (0x15b3 
- 0x1017) 00:12:27.080 17:28:53 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:27.080 17:28:53 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:27.080 17:28:53 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:12:27.080 17:28:53 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:27.080 17:28:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.080 17:28:53 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:27.080 17:28:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.080 17:28:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.080 17:28:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:27.080 17:28:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.080 17:28:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:12:27.081 Found net devices under 0000:09:00.0: mlx_0_0 00:12:27.081 17:28:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.081 17:28:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.081 17:28:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.081 17:28:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:27.081 17:28:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.081 17:28:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:12:27.081 Found net devices under 0000:09:00.1: mlx_0_1 00:12:27.081 17:28:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.081 17:28:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:27.081 17:28:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:27.081 17:28:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:27.081 17:28:53 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:27.081 17:28:53 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:27.081 17:28:53 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:27.081 17:28:53 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:27.081 17:28:53 -- nvmf/common.sh@58 -- # uname 00:12:27.081 17:28:53 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:27.081 17:28:53 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:27.081 17:28:53 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:27.081 17:28:53 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:27.081 17:28:53 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:27.081 17:28:53 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:27.081 17:28:53 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:27.081 17:28:53 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:27.081 17:28:53 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:27.081 17:28:53 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:27.081 17:28:53 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:27.081 17:28:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:27.081 17:28:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:27.081 17:28:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:27.081 17:28:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:27.081 17:28:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:27.081 17:28:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.081 17:28:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.081 17:28:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:27.081 17:28:53 -- nvmf/common.sh@104 -- # 
echo mlx_0_0 00:12:27.081 17:28:53 -- nvmf/common.sh@105 -- # continue 2 00:12:27.081 17:28:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.081 17:28:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.081 17:28:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:27.081 17:28:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.081 17:28:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:27.081 17:28:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:27.081 17:28:53 -- nvmf/common.sh@105 -- # continue 2 00:12:27.081 17:28:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:27.081 17:28:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:27.081 17:28:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:27.081 17:28:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:27.081 17:28:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.081 17:28:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:27.081 17:28:53 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:27.081 17:28:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:27.081 17:28:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:27.081 8: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:27.081 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:12:27.081 altname enp9s0f0np0 00:12:27.081 inet 192.168.100.8/24 scope global mlx_0_0 00:12:27.081 valid_lft forever preferred_lft forever 00:12:27.081 17:28:53 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:27.081 17:28:53 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:27.081 17:28:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:27.081 17:28:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:27.081 17:28:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.081 17:28:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:27.081 17:28:53 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:27.081 17:28:53 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:27.081 17:28:53 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:27.081 9: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:27.081 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:12:27.081 altname enp9s0f1np1 00:12:27.081 inet 192.168.100.9/24 scope global mlx_0_1 00:12:27.081 valid_lft forever preferred_lft forever 00:12:27.081 17:28:53 -- nvmf/common.sh@411 -- # return 0 00:12:27.081 17:28:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:27.081 17:28:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:27.081 17:28:53 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:27.081 17:28:53 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:27.081 17:28:53 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:27.081 17:28:53 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:27.081 17:28:53 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:27.081 17:28:53 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:27.081 17:28:53 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:27.081 17:28:53 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:27.081 17:28:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.081 17:28:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.081 17:28:53 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:27.081 17:28:53 -- 
nvmf/common.sh@104 -- # echo mlx_0_0 00:12:27.081 17:28:53 -- nvmf/common.sh@105 -- # continue 2 00:12:27.081 17:28:53 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.081 17:28:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.081 17:28:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:27.081 17:28:53 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.081 17:28:53 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:27.081 17:28:53 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:27.081 17:28:53 -- nvmf/common.sh@105 -- # continue 2 00:12:27.081 17:28:53 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:27.081 17:28:53 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:27.081 17:28:53 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:27.081 17:28:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:27.081 17:28:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.081 17:28:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:27.081 17:28:53 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:27.081 17:28:53 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:27.081 17:28:53 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:27.081 17:28:53 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:27.081 17:28:53 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.081 17:28:53 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:27.081 17:28:53 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:27.081 192.168.100.9' 00:12:27.081 17:28:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:27.081 192.168.100.9' 00:12:27.081 17:28:53 -- nvmf/common.sh@446 -- # head -n 1 00:12:27.081 17:28:53 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:27.081 17:28:53 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:27.081 192.168.100.9' 00:12:27.081 17:28:53 -- nvmf/common.sh@447 -- # tail -n +2 00:12:27.081 17:28:53 -- nvmf/common.sh@447 -- # head -n 1 00:12:27.081 17:28:53 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:27.081 17:28:53 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:27.081 17:28:53 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:27.081 17:28:53 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:27.081 17:28:53 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:27.081 17:28:53 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:27.081 17:28:53 -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:12:27.081 17:28:53 -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:12:27.081 17:28:53 -- target/device_removal.sh@237 -- # BOND_MASK=24 00:12:27.081 17:28:53 -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:12:27.081 17:28:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:27.081 17:28:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:27.081 17:28:53 -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 ************************************ 00:12:27.341 START TEST nvmf_device_removal_pci_remove_no_srq 00:12:27.341 ************************************ 00:12:27.341 17:28:53 -- common/autotest_common.sh@1111 -- # test_remove_and_rescan --no-srq 00:12:27.341 17:28:53 -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:12:27.341 17:28:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:27.341 17:28:53 -- 
common/autotest_common.sh@710 -- # xtrace_disable 00:12:27.341 17:28:53 -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 17:28:53 -- nvmf/common.sh@470 -- # nvmfpid=1997366 00:12:27.341 17:28:53 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:27.341 17:28:53 -- nvmf/common.sh@471 -- # waitforlisten 1997366 00:12:27.341 17:28:53 -- common/autotest_common.sh@817 -- # '[' -z 1997366 ']' 00:12:27.341 17:28:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.341 17:28:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:27.341 17:28:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.341 17:28:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:27.341 17:28:53 -- common/autotest_common.sh@10 -- # set +x 00:12:27.341 [2024-04-18 17:28:53.745933] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:12:27.341 [2024-04-18 17:28:53.746012] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.341 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.341 [2024-04-18 17:28:53.818179] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:27.600 [2024-04-18 17:28:53.939998] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.600 [2024-04-18 17:28:53.940057] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.600 [2024-04-18 17:28:53.940086] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.600 [2024-04-18 17:28:53.940097] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.600 [2024-04-18 17:28:53.940108] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
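nvmfappstart above launches the target and then blocks in waitforlisten until its RPC socket answers. A rough equivalent of that sequence, with the binary path and flags taken from the trace and the polling loop written out as an assumption rather than the actual waitforlisten helper:

  nvmf_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
  "$nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  rpc_sock=/var/tmp/spdk.sock
  until [ -S "$rpc_sock" ]; do       # wait for the target to create its UNIX-domain RPC socket
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
  done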
00:12:27.600 [2024-04-18 17:28:53.942405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.600 [2024-04-18 17:28:53.942418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.600 17:28:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:27.600 17:28:54 -- common/autotest_common.sh@850 -- # return 0 00:12:27.600 17:28:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:27.600 17:28:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:27.600 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:27.600 17:28:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.600 17:28:54 -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:12:27.600 17:28:54 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:12:27.600 17:28:54 -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:12:27.600 17:28:54 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:12:27.600 17:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.600 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:27.600 [2024-04-18 17:28:54.103579] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfbe530/0xfc2a20) succeed. 00:12:27.600 [2024-04-18 17:28:54.115298] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfbfa30/0x10040b0) succeed. 00:12:27.600 17:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.600 17:28:54 -- target/device_removal.sh@49 -- # get_rdma_if_list 00:12:27.600 17:28:54 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:27.600 17:28:54 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:27.600 17:28:54 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:27.600 17:28:54 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:27.600 17:28:54 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:27.600 17:28:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.600 17:28:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.600 17:28:54 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:27.600 17:28:54 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:27.600 17:28:54 -- nvmf/common.sh@105 -- # continue 2 00:12:27.600 17:28:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.600 17:28:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.600 17:28:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:27.600 17:28:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.600 17:28:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:27.600 17:28:54 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:27.600 17:28:54 -- nvmf/common.sh@105 -- # continue 2 00:12:27.600 17:28:54 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:12:27.600 17:28:54 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:12:27.600 17:28:54 -- target/device_removal.sh@25 -- # local -a dev_name 00:12:27.600 17:28:54 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:12:27.600 17:28:54 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:12:27.600 17:28:54 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:12:27.600 17:28:54 -- target/device_removal.sh@21 -- # 
echo nqn.2016-06.io.spdk:system_mlx_0_0 00:12:27.600 17:28:54 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:12:27.600 17:28:54 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:12:27.858 17:28:54 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:27.858 17:28:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:27.858 17:28:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.858 17:28:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:27.858 17:28:54 -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:12:27.858 17:28:54 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:12:27.858 17:28:54 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:12:27.858 17:28:54 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:12:27.858 17:28:54 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:12:27.858 17:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.858 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:27.858 17:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.858 17:28:54 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:12:27.858 17:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.858 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:27.858 17:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.858 17:28:54 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:12:27.858 17:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.858 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:27.858 17:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.858 17:28:54 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:12:27.858 17:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.858 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:27.858 [2024-04-18 17:28:54.227439] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:27.858 17:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.858 17:28:54 -- target/device_removal.sh@41 -- # return 0 00:12:27.858 17:28:54 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:12:27.858 17:28:54 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:12:27.858 17:28:54 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:12:27.858 17:28:54 -- target/device_removal.sh@25 -- # local -a dev_name 00:12:27.858 17:28:54 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:12:27.858 17:28:54 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:12:27.858 17:28:54 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:12:27.858 17:28:54 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:12:27.858 17:28:54 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:12:27.858 17:28:54 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:12:27.858 17:28:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:27.858 17:28:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:27.858 17:28:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.858 17:28:54 -- nvmf/common.sh@113 -- # cut 
-d/ -f1 00:12:27.858 17:28:54 -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:12:27.858 17:28:54 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:12:27.858 17:28:54 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:12:27.858 17:28:54 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:12:27.858 17:28:54 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:12:27.858 17:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.858 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:27.858 17:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.858 17:28:54 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:12:27.858 17:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.858 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:27.858 17:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.858 17:28:54 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:12:27.858 17:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.858 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:27.858 17:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.858 17:28:54 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:12:27.858 17:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.858 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:27.858 [2024-04-18 17:28:54.305529] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:12:27.858 17:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.858 17:28:54 -- target/device_removal.sh@41 -- # return 0 00:12:27.859 17:28:54 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:12:27.859 17:28:54 -- target/device_removal.sh@53 -- # return 0 00:12:27.859 17:28:54 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:12:27.859 17:28:54 -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:12:27.859 17:28:54 -- target/device_removal.sh@87 -- # local dev_names 00:12:27.859 17:28:54 -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:27.859 17:28:54 -- target/device_removal.sh@91 -- # bdevperf_pid=1997419 00:12:27.859 17:28:54 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:12:27.859 17:28:54 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:27.859 17:28:54 -- target/device_removal.sh@94 -- # waitforlisten 1997419 /var/tmp/bdevperf.sock 00:12:27.859 17:28:54 -- common/autotest_common.sh@817 -- # '[' -z 1997419 ']' 00:12:27.859 17:28:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:27.859 17:28:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:27.859 17:28:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
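For each RDMA port the test just issued the same short RPC sequence (malloc bdev, subsystem, namespace, rdma listener) before handing the I/O load over to bdevperf. Collapsed into direct scripts/rpc.py calls using the first port's values from the trace; rpc_cmd in the log is SPDK's test wrapper and should resolve to the same requests:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq
  $rpc bdev_malloc_create 128 512 -b mlx_0_0                      # 128 MiB backing bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420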
00:12:27.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:27.859 17:28:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:27.859 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:28.117 17:28:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:28.117 17:28:54 -- common/autotest_common.sh@850 -- # return 0 00:12:28.117 17:28:54 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:12:28.117 17:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.117 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:28.117 17:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.117 17:28:54 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:12:28.117 17:28:54 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:12:28.117 17:28:54 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:12:28.117 17:28:54 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:12:28.117 17:28:54 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:12:28.117 17:28:54 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:28.118 17:28:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:28.118 17:28:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:28.118 17:28:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:28.118 17:28:54 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:12:28.118 17:28:54 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:12:28.118 17:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.118 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:28.376 Nvme_mlx_0_0n1 00:12:28.376 17:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.376 17:28:54 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:12:28.376 17:28:54 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:12:28.376 17:28:54 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:12:28.376 17:28:54 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:12:28.376 17:28:54 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:12:28.376 17:28:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:28.376 17:28:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:28.376 17:28:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:28.376 17:28:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:28.376 17:28:54 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:12:28.376 17:28:54 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:12:28.376 17:28:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:28.376 17:28:54 -- common/autotest_common.sh@10 -- # set +x 00:12:28.376 Nvme_mlx_0_1n1 00:12:28.376 17:28:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:28.376 17:28:54 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=1997549 00:12:28.376 17:28:54 -- target/device_removal.sh@112 -- # sleep 5 00:12:28.376 17:28:54 -- target/device_removal.sh@109 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:12:33.649 17:28:59 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:12:33.649 17:28:59 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:12:33.649 17:28:59 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:12:33.649 17:28:59 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:12:33.649 17:28:59 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:12:33.649 17:28:59 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:12:33.649 17:28:59 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:12:33.649 17:28:59 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0/infiniband 00:12:33.649 17:28:59 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:12:33.649 17:28:59 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:12:33.649 17:28:59 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:33.649 17:28:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:33.649 17:28:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:33.649 17:28:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:33.649 17:28:59 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:12:33.649 17:28:59 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:12:33.649 17:28:59 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:12:33.649 17:28:59 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:12:33.649 17:28:59 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0 00:12:33.649 17:28:59 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:12:33.649 17:28:59 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:12:33.649 17:28:59 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:12:33.649 17:28:59 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:12:33.649 17:28:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.649 17:28:59 -- common/autotest_common.sh@10 -- # set +x 00:12:33.649 17:28:59 -- target/device_removal.sh@77 -- # grep mlx5_0 00:12:33.649 17:28:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.649 mlx5_0 00:12:33.649 17:28:59 -- target/device_removal.sh@78 -- # return 0 00:12:33.649 17:28:59 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:12:33.649 17:28:59 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:12:33.649 17:28:59 -- target/device_removal.sh@67 -- # echo 1 00:12:33.649 17:28:59 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:12:33.649 17:28:59 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:12:33.649 17:28:59 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:12:33.649 [2024-04-18 17:28:59.911771] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 
00:12:33.649 [2024-04-18 17:28:59.911953] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:12:33.649 [2024-04-18 17:28:59.912076] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:12:33.649 [2024-04-18 17:28:59.912098] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 95 00:12:33.649 [2024-04-18 17:28:59.912110] rdma.c: 703:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:12:33.649 [2024-04-18 17:28:59.912121] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912131] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912155] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912167] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912177] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912187] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912197] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912207] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912217] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912227] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912237] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912247] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912257] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912267] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912277] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912287] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912297] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912307] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912317] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912327] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912337] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912347] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912357] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912407] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912423] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912433] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912444] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912454] rdma.c: 
691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912464] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912475] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912485] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912495] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912505] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912515] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912525] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912535] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912545] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912555] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912565] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912576] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912586] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:33.649 [2024-04-18 17:28:59.912596] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:33.649 [2024-04-18 17:28:59.912606] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912616] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912627] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912637] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912647] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912657] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912667] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912701] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912711] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912721] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.649 [2024-04-18 17:28:59.912730] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.649 [2024-04-18 17:28:59.912740] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.912750] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.912760] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.912770] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:33.650 [2024-04-18 17:28:59.912780] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:33.650 [2024-04-18 17:28:59.912790] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.912800] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 
00:12:33.650 [2024-04-18 17:28:59.912810] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.912820] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.912830] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.912841] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.912851] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.912864] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.912875] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.912885] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.912896] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.912906] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.912916] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.912926] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.912936] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.912946] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.912955] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.912965] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.912976] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.912985] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.912995] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913005] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913014] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913024] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913034] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913043] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913053] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913063] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913073] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:33.650 [2024-04-18 17:28:59.913083] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:33.650 [2024-04-18 17:28:59.913093] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913103] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913112] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913122] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913133] rdma.c: 
689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913143] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913153] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913162] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913172] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913182] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913192] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913201] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913211] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913221] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913230] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913241] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913251] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913260] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913270] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913283] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913294] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913303] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913313] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:33.650 [2024-04-18 17:28:59.913323] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:33.650 [2024-04-18 17:28:59.913333] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913343] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913352] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913394] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913407] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913417] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913428] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913438] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913448] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:33.650 [2024-04-18 17:28:59.913458] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:33.650 [2024-04-18 17:28:59.913468] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913478] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913488] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 
00:12:33.650 [2024-04-18 17:28:59.913498] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913508] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913519] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913529] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913539] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913549] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913560] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913570] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913580] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913591] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913604] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913614] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913624] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913634] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913645] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913654] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913665] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913694] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913704] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913714] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:33.650 [2024-04-18 17:28:59.913723] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:33.650 [2024-04-18 17:28:59.913733] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.650 [2024-04-18 17:28:59.913743] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.650 [2024-04-18 17:28:59.913756] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.913766] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.913776] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:33.651 [2024-04-18 17:28:59.913786] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:33.651 [2024-04-18 17:28:59.913796] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.913806] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.913817] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.913826] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.913836] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.913846] rdma.c: 
691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.913856] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.913865] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.913875] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.913885] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.913895] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.913904] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.913914] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.913924] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.913934] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.913944] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.913953] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.913963] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.913973] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.913983] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.913993] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.914003] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.914013] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.914023] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.914033] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.914042] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.914052] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.914061] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.914071] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:33.651 [2024-04-18 17:28:59.914081] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:33.651 [2024-04-18 17:28:59.914091] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.914101] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.914110] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.914120] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.914130] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.914139] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:12:33.651 [2024-04-18 17:28:59.914150] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:12:33.651 [2024-04-18 17:28:59.914159] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 
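The block above is the removal half of the test for the first port: mlx_0_0 is resolved to its PCI function (0000:09:00.0) and RDMA device (mlx5_0), the device's presence in the nvmf target is confirmed through nvmf_get_stats, and the NIC is then hot-removed by writing to the PCI remove attribute, which produces the device-removal notice and the queue-pair dump shown. A minimal sketch of the same sequence, assuming root and an rpc.py at the path shown; the /sys/class/net route is an equivalent way to reach the PCI directory the script resolves via readlink:

  dev_name=mlx_0_0

  # netdev -> PCI function directory, e.g. /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0
  pci_dir=$(readlink -f /sys/class/net/$dev_name/device)

  # PCI function -> RDMA device name, e.g. mlx5_0
  rdma_dev=$(ls "$pci_dir/infiniband")

  # Confirm the nvmf target currently reports this RDMA device in its poll groups
  ./scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[0].transports[].devices[].name' \
      | grep -q "$rdma_dev" && echo "$rdma_dev is registered with the target"

  # Hot-remove the PCI function; the target logs the removal and tears down its qpairs
  echo 1 > "$pci_dir/remove"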
00:12:36.978 17:29:03 -- target/device_removal.sh@147 -- # seq 1 10 00:12:36.978 17:29:03 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:12:36.978 17:29:03 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:12:36.978 17:29:03 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:12:36.978 17:29:03 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:12:36.978 17:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:36.978 17:29:03 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:12:36.978 17:29:03 -- target/device_removal.sh@77 -- # grep mlx5_0 00:12:36.978 17:29:03 -- common/autotest_common.sh@10 -- # set +x 00:12:36.978 17:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:36.978 17:29:03 -- target/device_removal.sh@78 -- # return 1 00:12:36.978 17:29:03 -- target/device_removal.sh@149 -- # break 00:12:36.978 17:29:03 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:12:36.978 17:29:03 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:12:36.978 17:29:03 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:12:36.978 17:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:36.978 17:29:03 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:12:36.978 17:29:03 -- common/autotest_common.sh@10 -- # set +x 00:12:36.978 17:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:36.978 17:29:03 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:12:36.978 17:29:03 -- target/device_removal.sh@160 -- # rescan_pci 00:12:36.978 17:29:03 -- target/device_removal.sh@57 -- # echo 1 00:12:37.915 17:29:04 -- target/device_removal.sh@162 -- # seq 1 10 00:12:37.915 17:29:04 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:12:37.915 17:29:04 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0/net 00:12:37.915 17:29:04 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:12:37.915 17:29:04 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:12:37.915 17:29:04 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:12:37.915 17:29:04 -- target/device_removal.sh@171 -- # break 00:12:37.915 17:29:04 -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:12:37.915 17:29:04 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:12:37.915 [2024-04-18 17:29:04.377126] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfbe530/0xfc2a20) succeed. 00:12:37.915 [2024-04-18 17:29:04.377230] rdma.c:3367:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
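Once mlx5_0 has disappeared from nvmf_get_stats, the script records the remaining device count (ib_count_after_remove=1), rescans the PCI bus (the bare "echo 1" in the trace presumably targets the standard /sys/bus/pci/rescan node, since xtrace does not show redirections), waits for the netdev to reappear, and brings the link back up; the address is restored on the lines that follow. The notice above shows the target seeing the new IB device while the 192.168.100.8:4420 listener is still down. A sketch of that recovery path, with the PCI path taken from this node's log and the sleep pacing as an assumption:

  pci_dir=/sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0   # first port on this node

  # Re-enumerate the bus so the removed function comes back
  echo 1 > /sys/bus/pci/rescan

  # Wait up to 10 tries for a netdev to show up under the PCI function again
  for i in $(seq 1 10); do
      new_net_dev=$(ls "$pci_dir/net" 2>/dev/null)
      [ -n "$new_net_dev" ] && break
      sleep 1    # pacing is illustrative; the test script uses its own retry loop
  done

  # Restore link state and the address the RDMA listener was bound to
  ip link set "$new_net_dev" up
  ip addr add 192.168.100.8/24 dev "$new_net_dev"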
00:12:38.855 17:29:05 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:12:38.855 17:29:05 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:38.855 17:29:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:38.855 17:29:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:38.855 17:29:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:38.855 17:29:05 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:12:38.855 17:29:05 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:12:38.855 17:29:05 -- target/device_removal.sh@186 -- # seq 1 10 00:12:38.855 17:29:05 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:12:38.855 17:29:05 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:12:38.855 17:29:05 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:12:38.856 17:29:05 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:12:38.856 17:29:05 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:12:38.856 17:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.856 17:29:05 -- common/autotest_common.sh@10 -- # set +x 00:12:38.856 [2024-04-18 17:29:05.302476] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:38.856 [2024-04-18 17:29:05.302523] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:12:38.856 [2024-04-18 17:29:05.302547] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:12:38.856 [2024-04-18 17:29:05.302566] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:12:38.856 17:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.856 17:29:05 -- target/device_removal.sh@187 -- # ib_count=2 00:12:38.856 17:29:05 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:12:38.856 17:29:05 -- target/device_removal.sh@189 -- # break 00:12:38.856 17:29:05 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:12:38.856 17:29:05 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:12:38.856 17:29:05 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:12:38.856 17:29:05 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:12:38.856 17:29:05 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:12:38.856 17:29:05 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:12:38.856 17:29:05 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:12:38.856 17:29:05 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1/infiniband 00:12:38.856 17:29:05 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:12:38.856 17:29:05 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:12:38.856 17:29:05 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:38.856 17:29:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:38.856 17:29:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:38.856 17:29:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:38.856 17:29:05 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:12:38.856 17:29:05 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:12:38.856 17:29:05 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:12:38.856 17:29:05 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:12:38.856 17:29:05 -- 
target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1 00:12:38.856 17:29:05 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:12:38.856 17:29:05 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:12:38.856 17:29:05 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:12:38.856 17:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.856 17:29:05 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:12:38.856 17:29:05 -- common/autotest_common.sh@10 -- # set +x 00:12:38.856 17:29:05 -- target/device_removal.sh@77 -- # grep mlx5_1 00:12:38.856 17:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.856 mlx5_1 00:12:38.856 17:29:05 -- target/device_removal.sh@78 -- # return 0 00:12:38.856 17:29:05 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:12:38.856 17:29:05 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:12:38.856 17:29:05 -- target/device_removal.sh@67 -- # echo 1 00:12:38.856 17:29:05 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:12:38.856 17:29:05 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:12:38.856 17:29:05 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:12:38.856 [2024-04-18 17:29:05.387240] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:12:38.856 [2024-04-18 17:29:05.387359] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:12:38.856 [2024-04-18 17:29:05.391746] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:12:38.856 [2024-04-18 17:29:05.391771] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 95 00:12:38.856 [2024-04-18 17:29:05.391785] rdma.c: 703:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:12:38.856 [2024-04-18 17:29:05.391797] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.391807] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.391819] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.391829] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.391840] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.391850] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.391860] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.391870] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.391881] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.391891] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.391907] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.391932] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.391941] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.391951] rdma.c: 691:nvmf_rdma_dump_request: 
*ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.391960] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.391970] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.391994] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.392004] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.392014] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.392024] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.392035] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.392044] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.392070] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.392080] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.392100] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.392110] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.392119] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.392129] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.392139] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.392149] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.392167] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.392178] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.856 [2024-04-18 17:29:05.392188] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.856 [2024-04-18 17:29:05.392198] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392208] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392218] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392228] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392238] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392248] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392258] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392268] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392278] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392288] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392299] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392311] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392321] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 
17:29:05.392331] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392341] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392377] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392399] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392447] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392458] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392472] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392483] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392494] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392504] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392514] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392524] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392534] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392544] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392554] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392564] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392574] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392584] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392594] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392604] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392615] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392625] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392635] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392645] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392655] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392665] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392683] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392693] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392703] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392713] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392723] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392733] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392747] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: 
Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392757] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392768] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392778] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392788] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392798] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392809] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392821] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392846] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392856] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392866] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392876] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392885] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392912] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392942] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392961] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.392975] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.392986] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393000] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393013] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393029] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393043] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393056] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393067] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393077] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393087] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393096] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393106] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393126] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393136] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393146] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393156] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393166] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 
17:29:05.393176] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393190] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393200] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393210] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393220] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393230] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393241] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393251] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393261] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393271] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393281] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393292] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393302] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.857 [2024-04-18 17:29:05.393312] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.857 [2024-04-18 17:29:05.393322] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393332] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393342] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393352] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393362] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393376] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393393] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393403] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393414] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393424] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393437] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393448] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393458] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393469] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393479] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393488] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393498] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393508] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393518] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: 
Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393529] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393539] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393549] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393559] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393569] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393578] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393588] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393598] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393608] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393618] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393628] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393638] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393649] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393659] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393675] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393685] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393696] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393706] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393716] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393726] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393743] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393753] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393763] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393773] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393783] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393793] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393803] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393813] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393823] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393833] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393843] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393853] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393863] 
rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393877] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393887] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393897] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393907] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393917] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393927] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393937] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393947] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393957] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393967] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393976] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:38.858 [2024-04-18 17:29:05.393987] rdma.c: 689:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:12:38.858 [2024-04-18 17:29:05.393996] rdma.c: 691:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:12:43.054 17:29:08 -- target/device_removal.sh@147 -- # seq 1 10 00:12:43.054 17:29:08 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:12:43.054 17:29:08 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:12:43.054 17:29:08 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:12:43.054 17:29:08 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:12:43.054 17:29:08 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:12:43.054 17:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.054 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:12:43.054 17:29:08 -- target/device_removal.sh@77 -- # grep mlx5_1 00:12:43.054 17:29:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.054 17:29:08 -- target/device_removal.sh@78 -- # return 1 00:12:43.054 17:29:08 -- target/device_removal.sh@149 -- # break 00:12:43.054 17:29:08 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:12:43.054 17:29:08 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:12:43.054 17:29:08 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:12:43.054 17:29:08 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:12:43.054 17:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.054 17:29:08 -- common/autotest_common.sh@10 -- # set +x 00:12:43.054 17:29:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.054 17:29:09 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:12:43.054 17:29:09 -- target/device_removal.sh@160 -- # rescan_pci 00:12:43.054 17:29:09 -- target/device_removal.sh@57 -- # echo 1 00:12:43.313 [2024-04-18 17:29:09.851791] rdma.c:3314:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x11f6d70, err 11. Skip rescan. 
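The second port is exercised the same way: after the mlx5_1 removal dump, the grep against nvmf_get_stats comes back empty, the loop breaks, the remaining device count is captured, and the bus is rescanned (the "Failed to init ibv device ... Skip rescan" warning is the target noticing a device it cannot initialize yet). A sketch of the disappearance check, reusing the jq filters from the trace; the rpc.py path and the one-second pacing are assumptions:

  rdma_dev=mlx5_1

  # Poll until the removed RDMA device is no longer reported by the target
  for i in $(seq 1 10); do
      if ./scripts/rpc.py nvmf_get_stats \
             | jq -r '.poll_groups[0].transports[].devices[].name' \
             | grep -q "$rdma_dev"; then
          sleep 1          # still present; try again
      else
          break            # grep failed -> device is gone from the target
      fi
  done

  # How many RDMA devices the target still sees (expected: 1 of the original 2)
  ./scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[0].transports[].devices | length'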
00:12:43.572 17:29:09 -- target/device_removal.sh@162 -- # seq 1 10
00:12:43.572 17:29:09 -- target/device_removal.sh@162 -- # for i in $(seq 1 10)
00:12:43.572 17:29:09 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1/net
00:12:43.572 17:29:09 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1
00:12:43.572 17:29:09 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]]
00:12:43.572 17:29:09 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]]
00:12:43.572 17:29:09 -- target/device_removal.sh@171 -- # break
00:12:43.572 17:29:09 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]]
00:12:43.572 17:29:09 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up
00:12:43.572 [2024-04-18 17:29:09.953907] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1045750/0x10040b0) succeed.
00:12:43.572 [2024-04-18 17:29:09.954021] rdma.c:3367:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen.
00:12:44.510 17:29:10 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1
00:12:44.510 17:29:10 -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:12:44.510 17:29:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:12:44.510 17:29:10 -- nvmf/common.sh@113 -- # awk '{print $4}'
00:12:44.510 17:29:10 -- nvmf/common.sh@113 -- # cut -d/ -f1
00:12:44.510 17:29:10 -- target/device_removal.sh@180 -- # [[ -z '' ]]
00:12:44.510 17:29:10 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1
00:12:44.510 17:29:10 -- target/device_removal.sh@186 -- # seq 1 10
00:12:44.510 17:29:10 -- target/device_removal.sh@186 -- # for i in $(seq 1 10)
00:12:44.510 17:29:10 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt
00:12:44.510 17:29:10 -- target/device_removal.sh@82 -- # local rdma_dev_name=
00:12:44.510 17:29:10 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats
00:12:44.510 17:29:10 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length'
00:12:44.511 17:29:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:44.511 17:29:10 -- common/autotest_common.sh@10 -- # set +x
00:12:44.511 [2024-04-18 17:29:10.859333] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:12:44.511 [2024-04-18 17:29:10.859373] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back
00:12:44.511 [2024-04-18 17:29:10.859409] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change
00:12:44.511 [2024-04-18 17:29:10.859437] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change
00:12:44.511 17:29:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:44.511 17:29:10 -- target/device_removal.sh@187 -- # ib_count=2
00:12:44.511 17:29:10 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove ))
00:12:44.511 17:29:10 -- target/device_removal.sh@189 -- # break
00:12:44.511 17:29:10 -- target/device_removal.sh@200 -- # stop_bdevperf
00:12:44.511 17:29:10 -- target/device_removal.sh@116 -- # wait 1997549
00:14:05.963 0
00:14:05.963 17:30:25 -- target/device_removal.sh@118 -- # killprocess 1997419
00:14:05.963 17:30:25 -- common/autotest_common.sh@936 -- # '[' -z 1997419 ']'
00:14:05.963 17:30:25 -- common/autotest_common.sh@940 -- # kill -0 1997419
00:14:05.963 17:30:25 -- common/autotest_common.sh@941 -- # uname
00:14:05.963 17:30:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:05.963 17:30:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1997419
00:14:05.963 17:30:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:14:05.963 17:30:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:14:05.963 17:30:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1997419'
killing process with pid 1997419
00:14:05.963 17:30:25 -- common/autotest_common.sh@955 -- # kill 1997419
00:14:05.963 17:30:25 -- common/autotest_common.sh@960 -- # wait 1997419
00:14:05.963 17:30:25 -- target/device_removal.sh@119 -- # bdevperf_pid=
00:14:05.963 17:30:25 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:14:05.963 [2024-04-18 17:28:54.349966] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization...
00:14:05.963 [2024-04-18 17:28:54.350046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1997419 ]
00:14:05.963 EAL: No free 2048 kB hugepages reported on node 1
00:14:05.963 [2024-04-18 17:28:54.413239] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:05.963 [2024-04-18 17:28:54.518745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:14:05.963 Running I/O for 90 seconds...
00:14:05.963 [2024-04-18 17:28:59.912753] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:14:05.963 [2024-04-18 17:28:59.912794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:14:05.963 [2024-04-18 17:28:59.912814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32646 cdw0:16 sqhd:95dc p:0 m:0 dnr:0
00:14:05.963 [2024-04-18 17:28:59.912829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:14:05.963 [2024-04-18 17:28:59.912844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32646 cdw0:16 sqhd:95dc p:0 m:0 dnr:0
00:14:05.963 [2024-04-18 17:28:59.912858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:14:05.963 [2024-04-18 17:28:59.912871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32646 cdw0:16 sqhd:95dc p:0 m:0 dnr:0
00:14:05.963 [2024-04-18 17:28:59.912885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:14:05.963 [2024-04-18 17:28:59.912899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32646 cdw0:16 sqhd:95dc p:0 m:0 dnr:0
00:14:05.963 [2024-04-18 17:28:59.913052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:14:05.963 [2024-04-18 17:28:59.913074] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
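The trace above is the tail of device_removal.sh's recovery path: the test finds the re-created netdev under the PCI device's net/ directory, brings it up, restores 192.168.100.9/24, and polls nvmf_get_stats until the target reports the IB device again, then tears bdevperf down and dumps its log (try.txt, which continues below). A minimal sketch of that bring-up-and-poll step, assuming SPDK's scripts/rpc.py is on PATH and talking to the default RPC socket; the names ib_count_after_remove and the 1-second retry interval are illustrative, mirroring the trace rather than quoting the script:

#!/usr/bin/env bash
# Sketch of the bring-up-and-poll step traced above (assumption: rpc.py is
# on PATH and targets the running nvmf_tgt's default RPC socket).
set -euo pipefail

net_dev=${1:?usage: $0 <netdev> [ip] [ib_count_after_remove]}
ip_addr=${2:-192.168.100.9}
ib_count_after_remove=${3:-1}

# Bring the re-created netdev up and restore its address if it came back bare.
ip link set "$net_dev" up
if [[ -z $(ip -o -4 addr show "$net_dev" | awk '{print $4}' | cut -d/ -f1) ]]; then
    ip addr add "$ip_addr/24" dev "$net_dev"
fi

# Poll the target until nvmf_get_stats reports more RDMA devices than it did
# right after the removal (same jq filter the test uses).
for i in $(seq 1 10); do
    ib_count=$(rpc.py nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length')
    ib_count=${ib_count:-0}     # treat "no transports yet" as zero devices
    if (( ib_count > ib_count_after_remove )); then
        break
    fi
    sleep 1                     # retry interval is an assumption, not taken from the trace
done

In the run above the first probe already returned ib_count=2, so the loop broke immediately and stop_bdevperf was invoked.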
00:14:05.963 [2024-04-18 17:28:59.913109] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:05.963 [2024-04-18 17:28:59.922755] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.963 [2024-04-18 17:28:59.932780] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.963 [2024-04-18 17:28:59.942808] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.963 [2024-04-18 17:28:59.952836] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.963 [2024-04-18 17:28:59.962860] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:28:59.972887] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:28:59.982912] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:28:59.992942] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.002972] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.013005] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.023478] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.034191] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.044893] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.054918] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.064946] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.074972] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.084998] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.095027] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.105349] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.115840] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.125895] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.135939] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:05.964 [2024-04-18 17:29:00.145963] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.155990] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.166105] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.176451] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.186667] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.198042] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.208594] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.219420] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.229613] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.239734] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.251013] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.261037] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.271389] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.281763] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.291789] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.301817] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.311847] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.321885] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.332883] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.342908] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.353437] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.364346] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.374395] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:05.964 [2024-04-18 17:29:00.384418] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.394444] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.404469] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.414497] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.424968] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.435151] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.446096] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.456122] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.466150] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.476178] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.486208] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.496234] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.506264] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.516289] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.527226] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.537344] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.547368] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.558146] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.568173] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.578676] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.588700] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.598726] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.608750] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:05.964 [2024-04-18 17:29:00.618884] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.629155] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.639179] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.650110] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.660137] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.670162] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.680187] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.690437] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.700460] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.710543] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.721323] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.731720] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.743207] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.753598] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.764041] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.775027] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.785096] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.795121] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.805217] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.815589] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.825739] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.835766] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.964 [2024-04-18 17:29:00.846029] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:05.964 [2024-04-18 17:29:00.856962] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.965 [2024-04-18 17:29:00.867048] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.965 [2024-04-18 17:29:00.877085] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.965 [2024-04-18 17:29:00.888095] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.965 [2024-04-18 17:29:00.900092] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.965 [2024-04-18 17:29:00.910124] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.965 [2024-04-18 17:29:00.915590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:133056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.965 [2024-04-18 17:29:00.915619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.915650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:133064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.965 [2024-04-18 17:29:00.915666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.915693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:133072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.965 [2024-04-18 17:29:00.915708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.915724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:133080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.965 [2024-04-18 17:29:00.915737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.915753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:133088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.965 [2024-04-18 17:29:00.915767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.915782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:133096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.965 [2024-04-18 17:29:00.915796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.915811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:133104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.965 [2024-04-18 17:29:00.915824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.915841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:8 nsid:1 lba:133112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.965 [2024-04-18 17:29:00.915854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.915870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:132096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fe000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.915885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.915902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:132104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fc000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.915916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.915932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:132112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fa000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.915946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.915961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:132120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f8000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.915975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.915991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:132128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f6000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:132136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f4000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:132144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f2000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:132152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f0000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:132160 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x2000077ee000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:132168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ec000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:132176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ea000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:132184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e8000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:132192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e6000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:132200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e4000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:132208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e2000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:132216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e0000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:132224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077de000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:132232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077dc000 
len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:132240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077da000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:132248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d8000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:132256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d6000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:132264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d4000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.965 [2024-04-18 17:29:00.916553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:132272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d2000 len:0x1000 key:0x185b00 00:14:05.965 [2024-04-18 17:29:00.916567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:132280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d0000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.916597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:132288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ce000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.916627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:132296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077cc000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.916657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:132304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ca000 len:0x1000 key:0x185b00 
00:14:05.966 [2024-04-18 17:29:00.916687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:132312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c8000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.916717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:132320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c6000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.916753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:132328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c4000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.916783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:132336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c2000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.916813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:132344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c0000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.916843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:132352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077be000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.916873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:132360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077bc000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.916902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:132368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ba000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.916932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:132376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b8000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 
17:29:00.916961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.916977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:132384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b6000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.916991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:132392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b4000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:132400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b2000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:132408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b0000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:132416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ae000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:132424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ac000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:132432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077aa000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:132440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:132448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917238] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:132456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:132464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:132472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:132480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:132488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:132496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:132504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:132512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:132520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:132528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:132536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:132544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:132552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x185b00 00:14:05.966 [2024-04-18 17:29:00.917652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.966 [2024-04-18 17:29:00.917668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:132560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.917683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.917699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:132568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.917713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.917729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:132576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.917743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.917759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:132584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.917777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.917794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:132592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.917809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.917826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:132600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.917841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.917857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:132608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.917871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.917888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:132616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.917903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.917919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:132624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.917933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.917949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:132632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.917964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.917980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:132640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.917994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:132648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:132656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:132664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:132672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:132680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:132688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:132696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:132704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:132712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:132720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:132728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:132736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:132744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:132752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:132760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:132768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:132776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:132784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:132792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:132800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:132808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:132816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:132824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:132832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:132840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x185b00 00:14:05.967 [2024-04-18 17:29:00.918757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.967 [2024-04-18 17:29:00.918772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:132848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.918786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.918809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:132856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.918824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.918840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:132864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.918854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.918870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:132872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.918884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.918899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:132880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.918913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.918929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:132888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.918942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.918958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:132896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.918972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.918988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:132904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:132912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:132920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:132928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:132936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:132944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:132952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:132960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:132968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:132976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:132984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:132992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:133000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:133008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:133016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:133024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:133032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.919517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:133040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x185b00 00:14:05.968 [2024-04-18 17:29:00.919531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.934842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:05.968 [2024-04-18 17:29:00.934866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:05.968 [2024-04-18 17:29:00.934880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:133048 len:8 PRP1 0x0 PRP2 0x0 00:14:05.968 [2024-04-18 17:29:00.934894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.968 [2024-04-18 17:29:00.937953] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:14:05.968 [2024-04-18 17:29:00.938341] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:14:05.968 [2024-04-18 17:29:00.938367] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:05.968 [2024-04-18 17:29:00.938387] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:14:05.968 [2024-04-18 17:29:00.938418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:05.968 [2024-04-18 17:29:00.938435] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:14:05.968 [2024-04-18 17:29:00.938456] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:14:05.968 [2024-04-18 17:29:00.938471] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:14:05.968 [2024-04-18 17:29:00.938487] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:14:05.968 [2024-04-18 17:29:00.938520] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
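The long run of "ABORTED - SQ DELETION (00/08)" completions above is the NVMe driver failing back every queued READ when its I/O submission queue was deleted during the controller reset; the commands are reported as aborted, not executed, so the upper layer can resubmit them after the controller reconnects. As a minimal sketch (not the SPDK bdev_nvme code path shown in the log), an application-level completion callback could recognize that status and retry; the per-I/O context struct my_io and the resubmit_io() helper are hypothetical and assumed to be defined elsewhere:

    #include "spdk/nvme.h"

    /* Hypothetical per-I/O context; not an SPDK structure. */
    struct my_io {
            struct spdk_nvme_ns *ns;
            struct spdk_nvme_qpair *qpair;
            void *buf;
            uint64_t lba;
            uint32_t lba_count;
    };

    void resubmit_io(struct my_io *io);  /* assumed to be implemented elsewhere */

    /* Passed as cb_fn to spdk_nvme_ns_cmd_read()/spdk_nvme_ns_cmd_write(). */
    static void
    io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            struct my_io *io = ctx;

            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* The queued command was failed back because its submission
                     * queue was deleted during a reset ("00/08" in the log);
                     * retry it once the qpair is reconnected. */
                    resubmit_io(io);
                    return;
            }

            /* Handle genuine success or failure here. */
    }

The same status check is what makes these notices benign during a planned reset: nothing was executed, the commands were simply returned for resubmission.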
00:14:05.969 [2024-04-18 17:29:00.938538] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:14:05.969 [2024-04-18 17:29:02.945390] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:05.969 [2024-04-18 17:29:02.945447] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:14:05.969 [2024-04-18 17:29:02.945486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:05.969 [2024-04-18 17:29:02.945505] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:14:05.969 [2024-04-18 17:29:02.945527] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:14:05.969 [2024-04-18 17:29:02.945542] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:14:05.969 [2024-04-18 17:29:02.945557] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:14:05.969 [2024-04-18 17:29:02.945593] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:05.969 [2024-04-18 17:29:02.945612] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:14:05.969 [2024-04-18 17:29:04.950610] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:05.969 [2024-04-18 17:29:04.950691] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:14:05.969 [2024-04-18 17:29:04.950750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:05.969 [2024-04-18 17:29:04.950769] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:14:05.969 [2024-04-18 17:29:04.950793] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:14:05.969 [2024-04-18 17:29:04.950809] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:14:05.969 [2024-04-18 17:29:04.950824] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:14:05.969 [2024-04-18 17:29:04.950875] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:14:05.969 [2024-04-18 17:29:04.950897] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:14:05.969 [2024-04-18 17:29:05.384726] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:05.969 [2024-04-18 17:29:05.384783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.969 [2024-04-18 17:29:05.384831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32646 cdw0:16 sqhd:95dc p:0 m:0 dnr:0 00:14:05.969 [2024-04-18 17:29:05.384846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.969 [2024-04-18 17:29:05.384874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32646 cdw0:16 sqhd:95dc p:0 m:0 dnr:0 00:14:05.969 [2024-04-18 17:29:05.384888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.969 [2024-04-18 17:29:05.384902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32646 cdw0:16 sqhd:95dc p:0 m:0 dnr:0 00:14:05.969 [2024-04-18 17:29:05.384916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.969 [2024-04-18 17:29:05.384929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32646 cdw0:16 sqhd:95dc p:0 m:0 dnr:0 00:14:05.969 [2024-04-18 17:29:05.391226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:05.969 [2024-04-18 17:29:05.391257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:14:05.969 [2024-04-18 17:29:05.391312] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:05.969 [2024-04-18 17:29:05.394718] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.404747] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.414775] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.424800] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.434825] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.444850] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.454875] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.464901] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.474929] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:05.969 [2024-04-18 17:29:05.484954] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.494979] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.505004] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.515030] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.525057] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.535083] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.545109] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.555137] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.565165] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.575191] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.585219] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.595244] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.605272] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.615297] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.625324] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.635351] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.645386] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.655406] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.665432] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.675458] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.685483] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.695508] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.705533] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:05.969 [2024-04-18 17:29:05.715560] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.725587] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.735612] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.745639] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.755667] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.765693] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.775719] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.785746] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.795773] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.805799] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.815827] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.825853] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.835879] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.845907] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.855934] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.865960] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.875985] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.886014] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.896041] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.906069] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.916095] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.926121] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.936149] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:05.969 [2024-04-18 17:29:05.946175] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.957222] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.982736] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.992734] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.969 [2024-04-18 17:29:05.996031] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:05.969 [2024-04-18 17:29:06.002762] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.012788] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.022815] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.032842] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.042869] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.052898] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.062923] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.072948] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.082976] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.093004] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.103032] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.113057] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.123082] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.133109] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.143136] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.153165] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.163192] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.173218] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:14:05.970 [2024-04-18 17:29:06.183246] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.193272] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.203297] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.213323] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.223349] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.233374] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.243401] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.253427] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.263454] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.273480] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.283508] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.293533] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.303561] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.313586] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.323614] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.333642] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.343668] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.353693] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.363720] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.373748] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:14:05.970 [2024-04-18 17:29:06.383773] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
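The repeated "Unable to perform failover, already in progress." notices show bdev_nvme serializing recovery: while one reset/failover is in flight, further failover requests are rejected, and the reconnect attempts above are retried on a timer (roughly every two seconds here) until "Resetting controller successful." is finally logged at 17:29:05.996. A minimal, deliberately synchronous sketch of the same retry pattern using the public spdk_nvme_ctrlr_reset() call (the bdev_nvme module itself uses the asynchronous disconnect/reconnect-poll path named in the log); the retry count and delay below are illustrative, not SPDK defaults:

    #include <unistd.h>
    #include "spdk/nvme.h"

    #define RESET_RETRY_DELAY_US (2 * 1000 * 1000)  /* illustrative ~2 s cadence */
    #define RESET_MAX_RETRIES    5                  /* illustrative budget */

    /* Retry a full controller reset until it succeeds or the budget runs out. */
    static int
    reset_with_retries(struct spdk_nvme_ctrlr *ctrlr)
    {
            for (int i = 0; i < RESET_MAX_RETRIES; i++) {
                    if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
                            return 0;  /* reconnect succeeded */
                    }
                    /* Reset failed, e.g. the RDMA address resolution error seen
                     * above while the target port is still down; wait and retry. */
                    usleep(RESET_RETRY_DELAY_US);
            }
            return -1;
    }

In an event-driven SPDK application this loop would normally be driven by a poller rather than usleep(), which is why the log shows the reconnect attempts interleaved with the failover notices instead of blocking.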
00:14:05.970 [2024-04-18 17:29:06.393791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:241696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.393832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.393858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:241704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.393874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.393890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:241712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.393903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.393918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:241720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.393931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.393947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:241728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.393960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.393975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:241736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.393988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:241744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:241752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:241760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:241768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:241776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:241784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:241792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:241800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:241808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:241816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:241824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:241832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:241840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:241848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394393] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:241856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:241864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.970 [2024-04-18 17:29:06.394466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:241872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.970 [2024-04-18 17:29:06.394480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:241880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:241888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:241896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:241904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:241912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:241920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:241928 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:241936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:241944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:241952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:241960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:241968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:241976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:241984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:241992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:242000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:242008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.394976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.394991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:242016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:242024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:242032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:242040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:242048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:242056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:242064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:242072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:242080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 
p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:242088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:242096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:242104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:242112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:242120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:242128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:242136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:242144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:242152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:242160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:242168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:242176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.971 [2024-04-18 17:29:06.395617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:242184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.971 [2024-04-18 17:29:06.395631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.395648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:242192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.395661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.395676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:242200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.395690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.395704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:242208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.395718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.395733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:242216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.395747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.395762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:242224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.395775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.395790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:242232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.395803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.395818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:242240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 
17:29:06.395832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.395847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:242248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.395860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.395875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:242256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.395888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.395903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:242264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.395916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.395931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:242272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.395948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.395964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:242280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.395977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.395992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:242288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:242296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:242304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:242312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:242320 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:242328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:242336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:242344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:242352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:242360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:242368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:242376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:242384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:242392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:242400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:242408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:242416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:242424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:242432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:242440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:242448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.972 [2024-04-18 17:29:06.396630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.972 [2024-04-18 17:29:06.396645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:242456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.396674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.396690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:242464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.396702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.396736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:242472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.396751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 
m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.396767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:242480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.396780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.396795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:242488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.396809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.396823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:242496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.396837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.396852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:242504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.396865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.396880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:242512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.396894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.396909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:242520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.396922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.396937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:242528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.396951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.396966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:242536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.396979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.396994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:242544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:242552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:242560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:242568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:242576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:242584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:242592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:242600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:242608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:242616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:242624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:242632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 
17:29:06.397326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:242640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:242648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:242656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:242664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:242672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:242680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:05.973 [2024-04-18 17:29:06.397513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:241664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1c1b00 00:14:05.973 [2024-04-18 17:29:06.397543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:241672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1c1b00 00:14:05.973 [2024-04-18 17:29:06.397573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.397595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:241680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1c1b00 00:14:05.973 [2024-04-18 17:29:06.397609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32646 cdw0:b0f2ce70 sqhd:fb93 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.413454] 
rdma_verbs.c: 83:spdk_rdma_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests 00:14:05.973 [2024-04-18 17:29:06.413540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:05.973 [2024-04-18 17:29:06.413559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:05.973 [2024-04-18 17:29:06.413572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:241688 len:8 PRP1 0x0 PRP2 0x0 00:14:05.973 [2024-04-18 17:29:06.413586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.973 [2024-04-18 17:29:06.413655] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:14:05.973 [2024-04-18 17:29:06.413971] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:14:05.973 [2024-04-18 17:29:06.413994] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:05.973 [2024-04-18 17:29:06.414006] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:14:05.973 [2024-04-18 17:29:06.414031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:05.973 [2024-04-18 17:29:06.414046] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:14:05.973 [2024-04-18 17:29:06.414065] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:14:05.973 [2024-04-18 17:29:06.414079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:14:05.973 [2024-04-18 17:29:06.414093] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:14:05.973 [2024-04-18 17:29:06.414118] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:05.973 [2024-04-18 17:29:06.414140] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:14:05.973 [2024-04-18 17:29:08.422125] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:05.974 [2024-04-18 17:29:08.422176] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:14:05.974 [2024-04-18 17:29:08.422231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:05.974 [2024-04-18 17:29:08.422248] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:14:05.974 [2024-04-18 17:29:08.424146] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:14:05.974 [2024-04-18 17:29:08.424169] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:14:05.974 [2024-04-18 17:29:08.424200] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:14:05.974 [2024-04-18 17:29:08.424269] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:05.974 [2024-04-18 17:29:08.424291] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:14:05.974 [2024-04-18 17:29:10.429338] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:14:05.974 [2024-04-18 17:29:10.429424] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:14:05.974 [2024-04-18 17:29:10.429468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:05.974 [2024-04-18 17:29:10.429487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:14:05.974 [2024-04-18 17:29:10.429538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:14:05.974 [2024-04-18 17:29:10.429556] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:14:05.974 [2024-04-18 17:29:10.429573] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:14:05.974 [2024-04-18 17:29:10.429620] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:05.974 [2024-04-18 17:29:10.429641] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:14:05.974 [2024-04-18 17:29:11.486826] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:05.974
00:14:05.974 Latency(us)
00:14:05.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:05.974 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:14:05.974 Verification LBA range: start 0x0 length 0x8000
00:14:05.974 Nvme_mlx_0_0n1 : 90.01 9492.76 37.08 0.00 0.00 13459.80 163.84 7058858.29
00:14:05.974 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:14:05.974 Verification LBA range: start 0x0 length 0x8000
00:14:05.974 Nvme_mlx_0_1n1 : 90.01 9093.02 35.52 0.00 0.00 14052.35 2111.72 7058858.29
00:14:05.974 ===================================================================================================================
00:14:05.974 Total : 18585.78 72.60 0.00 0.00 13749.71 163.84 7058858.29
00:14:05.974 Received shutdown signal, test time was about 90.000000 seconds
00:14:05.974
00:14:05.974 Latency(us)
00:14:05.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:05.974 ===================================================================================================================
00:14:05.974 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:05.974 17:30:25 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:14:05.974 17:30:25 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:14:05.974 17:30:25 -- target/device_removal.sh@202 -- # killprocess 1997366
00:14:05.974 17:30:25 -- common/autotest_common.sh@936 -- # '[' -z 1997366 ']'
00:14:05.974 17:30:25 -- common/autotest_common.sh@940 -- # kill -0 1997366
00:14:05.974 17:30:25 -- common/autotest_common.sh@941 -- # uname
00:14:05.974 17:30:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:05.974 17:30:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1997366
00:14:05.974 17:30:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:05.974 17:30:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:05.974 17:30:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1997366'
00:14:05.974 killing process with pid 1997366
00:14:05.974 17:30:25 -- common/autotest_common.sh@955 -- # kill 1997366
00:14:05.974 17:30:25 -- common/autotest_common.sh@960 -- # wait 1997366
00:14:05.974 17:30:25 -- target/device_removal.sh@203 -- # nvmfpid=
00:14:05.974 17:30:25 -- target/device_removal.sh@205 -- # return 0
00:14:05.974
00:14:05.974 real 1m32.277s
00:14:05.974 user 4m29.167s
00:14:05.974 sys 0m2.479s
00:14:05.974 17:30:25 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:14:05.974 17:30:25 -- common/autotest_common.sh@10 -- # set +x
00:14:05.974 ************************************
00:14:05.974 END TEST nvmf_device_removal_pci_remove_no_srq
00:14:05.974 ************************************
00:14:05.974 17:30:26 -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan
00:14:05.974 17:30:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:14:05.974 17:30:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:05.974 17:30:26 -- common/autotest_common.sh@10 -- # set +x
00:14:05.974 ************************************
00:14:05.974 START TEST nvmf_device_removal_pci_remove
00:14:05.974 ************************************
00:14:05.974 17:30:26 -- common/autotest_common.sh@1111 -- # test_remove_and_rescan
00:14:05.974 17:30:26 -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3
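The MiB/s column in the bdevperf result table above is simply the IOPS figure scaled by the job's 4096-byte I/O size. A quick arithmetic check, included here only as a sketch and not part of the test itself, reproduces the reported numbers:

  # MiB/s = IOPS * io_size / 2^20, with io_size = 4096 as shown in the Job lines above
  awk 'BEGIN { printf "%.2f MiB/s\n", 9492.76  * 4096 / 1048576 }'    # -> 37.08 (Nvme_mlx_0_0n1)
  awk 'BEGIN { printf "%.2f MiB/s\n", 18585.78 * 4096 / 1048576 }'    # -> 72.60 (Total)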
17:30:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:05.974 17:30:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:05.974 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.974 17:30:26 -- nvmf/common.sh@470 -- # nvmfpid=2008991 00:14:05.974 17:30:26 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:05.974 17:30:26 -- nvmf/common.sh@471 -- # waitforlisten 2008991 00:14:05.974 17:30:26 -- common/autotest_common.sh@817 -- # '[' -z 2008991 ']' 00:14:05.974 17:30:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.974 17:30:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:05.974 17:30:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.974 17:30:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:05.974 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.974 [2024-04-18 17:30:26.142174] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:14:05.974 [2024-04-18 17:30:26.142258] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.974 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.974 [2024-04-18 17:30:26.200571] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:05.974 [2024-04-18 17:30:26.304572] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.974 [2024-04-18 17:30:26.304631] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.974 [2024-04-18 17:30:26.304661] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.974 [2024-04-18 17:30:26.304673] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.974 [2024-04-18 17:30:26.304684] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
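Stripped of the autotest wrappers, the nvmfappstart step traced above amounts to starting the nvmf_tgt binary with the flags logged (-i 0 -e 0xFFFF -m 0x3) and waiting until its RPC socket answers before any configuration calls are made. A rough sketch, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods purely as a liveness probe (the harness's waitforlisten helper is more involved than this):

  # launch the SPDK NVMe-oF target and wait for its RPC socket to come up
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done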
00:14:05.974 [2024-04-18 17:30:26.304831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.974 [2024-04-18 17:30:26.304850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.974 17:30:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:05.974 17:30:26 -- common/autotest_common.sh@850 -- # return 0 00:14:05.974 17:30:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:05.974 17:30:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:05.974 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.974 17:30:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.974 17:30:26 -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:14:05.974 17:30:26 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:14:05.974 17:30:26 -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:14:05.974 17:30:26 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:05.974 17:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.974 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.974 [2024-04-18 17:30:26.465395] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1781530/0x1785a20) succeed. 00:14:05.974 [2024-04-18 17:30:26.475662] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1782a30/0x17c70b0) succeed. 00:14:05.974 17:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.974 17:30:26 -- target/device_removal.sh@49 -- # get_rdma_if_list 00:14:05.974 17:30:26 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:05.974 17:30:26 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:05.974 17:30:26 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:05.974 17:30:26 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:05.974 17:30:26 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:05.974 17:30:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:05.974 17:30:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.974 17:30:26 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:05.974 17:30:26 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:05.974 17:30:26 -- nvmf/common.sh@105 -- # continue 2 00:14:05.974 17:30:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:05.974 17:30:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.974 17:30:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:05.974 17:30:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.974 17:30:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:05.974 17:30:26 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:05.974 17:30:26 -- nvmf/common.sh@105 -- # continue 2 00:14:05.974 17:30:26 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:14:05.974 17:30:26 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:14:05.974 17:30:26 -- target/device_removal.sh@25 -- # local -a dev_name 00:14:05.974 17:30:26 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:14:05.974 17:30:26 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:14:05.974 17:30:26 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:14:05.974 17:30:26 -- target/device_removal.sh@21 -- # echo 
nqn.2016-06.io.spdk:system_mlx_0_0 00:14:05.974 17:30:26 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:14:05.975 17:30:26 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:14:05.975 17:30:26 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:05.975 17:30:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:05.975 17:30:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:05.975 17:30:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:05.975 17:30:26 -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:14:05.975 17:30:26 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:14:05.975 17:30:26 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:14:05.975 17:30:26 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:14:05.975 17:30:26 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:14:05.975 17:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.975 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.975 17:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.975 17:30:26 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:14:05.975 17:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.975 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.975 17:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.975 17:30:26 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:14:05.975 17:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.975 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.975 17:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.975 17:30:26 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:14:05.975 17:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.975 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.975 [2024-04-18 17:30:26.674754] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:05.975 17:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.975 17:30:26 -- target/device_removal.sh@41 -- # return 0 00:14:05.975 17:30:26 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:14:05.975 17:30:26 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:14:05.975 17:30:26 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:14:05.975 17:30:26 -- target/device_removal.sh@25 -- # local -a dev_name 00:14:05.975 17:30:26 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:14:05.975 17:30:26 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:14:05.975 17:30:26 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:14:05.975 17:30:26 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:14:05.975 17:30:26 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:14:05.975 17:30:26 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:14:05.975 17:30:26 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:05.975 17:30:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:05.975 17:30:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:05.975 17:30:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 
00:14:05.975 17:30:26 -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:14:05.975 17:30:26 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:14:05.975 17:30:26 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:14:05.975 17:30:26 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:14:05.975 17:30:26 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:14:05.975 17:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.975 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.975 17:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.975 17:30:26 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:14:05.975 17:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.975 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.975 17:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.975 17:30:26 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:14:05.975 17:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.975 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.975 17:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.975 17:30:26 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:14:05.975 17:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.975 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.975 [2024-04-18 17:30:26.755089] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:14:05.975 17:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.975 17:30:26 -- target/device_removal.sh@41 -- # return 0 00:14:05.975 17:30:26 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:14:05.975 17:30:26 -- target/device_removal.sh@53 -- # return 0 00:14:05.975 17:30:26 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:14:05.975 17:30:26 -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:14:05.975 17:30:26 -- target/device_removal.sh@87 -- # local dev_names 00:14:05.975 17:30:26 -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:05.975 17:30:26 -- target/device_removal.sh@91 -- # bdevperf_pid=2009135 00:14:05.975 17:30:26 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:05.975 17:30:26 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:14:05.975 17:30:26 -- target/device_removal.sh@94 -- # waitforlisten 2009135 /var/tmp/bdevperf.sock 00:14:05.975 17:30:26 -- common/autotest_common.sh@817 -- # '[' -z 2009135 ']' 00:14:05.975 17:30:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:05.975 17:30:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:05.975 17:30:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
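The create_subsystem_and_connect trace above provisions one malloc-backed NVMe-oF subsystem per mlx interface before bdevperf is launched. Collapsed into direct rpc.py calls it looks roughly like the sketch below; rpc_cmd in the harness issues the same RPCs, and the values are taken from the log for mlx_0_0:

  # target-side provisioning, as traced above
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # once, for the RDMA transport
  scripts/rpc.py bdev_malloc_create 128 512 -b mlx_0_0                              # 128 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420
  # mlx_0_1 is set up the same way with serial SPDK000mlx_0_1 and a listener on 192.168.100.9:4420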
00:14:05.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:05.975 17:30:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:05.975 17:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:05.975 17:30:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:05.975 17:30:27 -- common/autotest_common.sh@850 -- # return 0 00:14:05.975 17:30:27 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:14:05.975 17:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.975 17:30:27 -- common/autotest_common.sh@10 -- # set +x 00:14:05.975 17:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.975 17:30:27 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:14:05.975 17:30:27 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:14:05.975 17:30:27 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:14:05.975 17:30:27 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:14:05.975 17:30:27 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:14:05.975 17:30:27 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:05.975 17:30:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:05.975 17:30:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:05.975 17:30:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:05.975 17:30:27 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:14:05.975 17:30:27 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:14:05.975 17:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.975 17:30:27 -- common/autotest_common.sh@10 -- # set +x 00:14:05.975 Nvme_mlx_0_0n1 00:14:05.975 17:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.975 17:30:27 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:14:05.975 17:30:27 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:14:05.975 17:30:27 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:14:05.975 17:30:27 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:14:05.975 17:30:27 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:14:05.975 17:30:27 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:05.975 17:30:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:05.975 17:30:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:05.975 17:30:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:05.975 17:30:27 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:14:05.975 17:30:27 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:14:05.975 17:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.975 17:30:27 -- common/autotest_common.sh@10 -- # set +x 00:14:05.975 Nvme_mlx_0_1n1 00:14:05.975 17:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.975 17:30:27 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=2009227 00:14:05.975 17:30:27 -- target/device_removal.sh@112 -- # sleep 5 00:14:05.975 17:30:27 -- target/device_removal.sh@109 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:14:05.975 17:30:32 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:14:05.975 17:30:32 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:14:05.975 17:30:32 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:14:05.975 17:30:32 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:14:05.975 17:30:32 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:14:05.975 17:30:32 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:14:05.975 17:30:32 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:14:05.975 17:30:32 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0/infiniband 00:14:05.975 17:30:32 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:14:05.975 17:30:32 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:14:05.975 17:30:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:05.975 17:30:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:05.975 17:30:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:05.975 17:30:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:05.975 17:30:32 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:14:05.975 17:30:32 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:14:05.975 17:30:32 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:14:05.975 17:30:32 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:14:05.975 17:30:32 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0 00:14:05.975 17:30:32 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:14:05.975 17:30:32 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:14:05.976 17:30:32 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:05.976 17:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.976 17:30:32 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:05.976 17:30:32 -- common/autotest_common.sh@10 -- # set +x 00:14:05.976 17:30:32 -- target/device_removal.sh@77 -- # grep mlx5_0 00:14:05.976 17:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.976 mlx5_0 00:14:05.976 17:30:32 -- target/device_removal.sh@78 -- # return 0 00:14:05.976 17:30:32 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:14:05.976 17:30:32 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:14:05.976 17:30:32 -- target/device_removal.sh@67 -- # echo 1 00:14:05.976 17:30:32 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:14:05.976 17:30:32 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:14:05.976 17:30:32 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device 00:14:05.976 [2024-04-18 17:30:32.367833] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 
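The removal half of the loop above first confirms that the target's poll groups really reference the RDMA device behind mlx_0_0, then hot-unplugs the PCI function through sysfs, which is what produces the "is being removed" notice. A sketch of those two steps, run as root; the write to the device's sysfs remove node is an assumption inferred from the echoed paths, since the trace only shows "echo 1" plus the resolved PCI directory:

  # verify the target owns mlx5_0, then soft-remove its PCI function
  scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices[].name' | grep mlx5_0
  pci_dir=$(readlink -f /sys/bus/pci/devices/0000:09:00.0/net/mlx_0_0/device)
  echo 1 > "$pci_dir/remove"    # detaches 0000:09:00.0 from the PCI bus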
00:14:05.976 [2024-04-18 17:30:32.367945] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:05.976 [2024-04-18 17:30:32.370529] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:05.976 [2024-04-18 17:30:32.370561] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 65 00:14:10.164 17:30:35 -- target/device_removal.sh@147 -- # seq 1 10 00:14:10.164 17:30:35 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:14:10.164 17:30:35 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:14:10.164 17:30:35 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:14:10.164 17:30:35 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:10.164 17:30:35 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:10.164 17:30:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:10.164 17:30:35 -- common/autotest_common.sh@10 -- # set +x 00:14:10.164 17:30:35 -- target/device_removal.sh@77 -- # grep mlx5_0 00:14:10.164 17:30:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:10.164 17:30:35 -- target/device_removal.sh@78 -- # return 1 00:14:10.164 17:30:35 -- target/device_removal.sh@149 -- # break 00:14:10.164 17:30:35 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:14:10.164 17:30:35 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:14:10.164 17:30:35 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:14:10.164 17:30:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:10.164 17:30:35 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:14:10.164 17:30:35 -- common/autotest_common.sh@10 -- # set +x 00:14:10.164 17:30:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:10.164 17:30:35 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:14:10.164 17:30:35 -- target/device_removal.sh@160 -- # rescan_pci 00:14:10.164 17:30:35 -- target/device_removal.sh@57 -- # echo 1 00:14:10.424 [2024-04-18 17:30:36.764310] rdma.c:3314:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x185e950, err 11. Skip rescan. 00:14:10.424 17:30:36 -- target/device_removal.sh@162 -- # seq 1 10 00:14:10.424 17:30:36 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:14:10.424 17:30:36 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.0/net 00:14:10.424 17:30:36 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:14:10.424 17:30:36 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:14:10.424 17:30:36 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:14:10.424 17:30:36 -- target/device_removal.sh@171 -- # break 00:14:10.424 17:30:36 -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:14:10.424 17:30:36 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:14:10.424 [2024-04-18 17:30:36.874340] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17841c0/0x1785a20) succeed. 00:14:10.424 [2024-04-18 17:30:36.874447] rdma.c:3367:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
00:14:11.362 17:30:37 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:14:11.362 17:30:37 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:11.362 17:30:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:11.362 17:30:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:11.362 17:30:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:11.362 17:30:37 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:14:11.362 17:30:37 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:14:11.362 17:30:37 -- target/device_removal.sh@186 -- # seq 1 10 00:14:11.362 17:30:37 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:14:11.362 17:30:37 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:14:11.362 17:30:37 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:14:11.362 17:30:37 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:14:11.362 17:30:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.362 17:30:37 -- common/autotest_common.sh@10 -- # set +x 00:14:11.362 17:30:37 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:14:11.362 [2024-04-18 17:30:37.803396] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:11.362 [2024-04-18 17:30:37.803432] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:14:11.362 [2024-04-18 17:30:37.803452] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:14:11.362 [2024-04-18 17:30:37.803470] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:14:11.362 17:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.362 17:30:37 -- target/device_removal.sh@187 -- # ib_count=2 00:14:11.362 17:30:37 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:14:11.362 17:30:37 -- target/device_removal.sh@189 -- # break 00:14:11.362 17:30:37 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:14:11.362 17:30:37 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:14:11.362 17:30:37 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:14:11.362 17:30:37 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:14:11.362 17:30:37 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:14:11.362 17:30:37 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:14:11.362 17:30:37 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:14:11.362 17:30:37 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1/infiniband 00:14:11.362 17:30:37 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:14:11.362 17:30:37 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:14:11.362 17:30:37 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:11.362 17:30:37 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:11.362 17:30:37 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:11.362 17:30:37 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:11.362 17:30:37 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:14:11.362 17:30:37 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:14:11.362 17:30:37 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:14:11.362 17:30:37 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:14:11.362 17:30:37 -- 
target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1 00:14:11.362 17:30:37 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:14:11.362 17:30:37 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:14:11.362 17:30:37 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:11.362 17:30:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:11.362 17:30:37 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:11.362 17:30:37 -- common/autotest_common.sh@10 -- # set +x 00:14:11.362 17:30:37 -- target/device_removal.sh@77 -- # grep mlx5_1 00:14:11.362 17:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:11.362 mlx5_1 00:14:11.362 17:30:37 -- target/device_removal.sh@78 -- # return 0 00:14:11.362 17:30:37 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:14:11.362 17:30:37 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:14:11.362 17:30:37 -- target/device_removal.sh@67 -- # echo 1 00:14:11.362 17:30:37 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:14:11.362 17:30:37 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:14:11.362 17:30:37 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device 00:14:11.362 [2024-04-18 17:30:37.897026] rdma.c:3610:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 00:14:11.363 [2024-04-18 17:30:37.897126] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:11.633 [2024-04-18 17:30:37.902978] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:14:11.633 [2024-04-18 17:30:37.903009] rdma.c: 916:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 96 00:14:14.979 17:30:41 -- target/device_removal.sh@147 -- # seq 1 10 00:14:14.979 17:30:41 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:14:14.979 17:30:41 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:14:14.979 17:30:41 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:14:14.979 17:30:41 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:14:14.979 17:30:41 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:14:14.979 17:30:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.979 17:30:41 -- common/autotest_common.sh@10 -- # set +x 00:14:14.979 17:30:41 -- target/device_removal.sh@77 -- # grep mlx5_1 00:14:14.979 17:30:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:14.979 17:30:41 -- target/device_removal.sh@78 -- # return 1 00:14:14.979 17:30:41 -- target/device_removal.sh@149 -- # break 00:14:14.979 17:30:41 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:14:14.979 17:30:41 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:14:14.979 17:30:41 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:14:14.979 17:30:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:14.979 17:30:41 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:14:14.979 17:30:41 -- common/autotest_common.sh@10 -- # set +x 00:14:14.979 17:30:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:15.239 17:30:41 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:14:15.239 17:30:41 -- 
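Each removal is followed by a recovery pass: the PCI bus is rescanned, the netdev is brought back up with its address, and the loop polls until the target reports both IB devices again, as seen for mlx_0_0 above and repeated for mlx_0_1 below. As a sketch, assuming rescan_pci writes to the standard /sys/bus/pci/rescan node, which is consistent with the bare "echo 1" in the trace; run as root, addresses as logged:

  # rescan and restore the interface, then confirm the target sees two IB devices again
  echo 1 > /sys/bus/pci/rescan
  ip link set mlx_0_0 up
  ip addr add 192.168.100.8/24 dev mlx_0_0      # only needed when the address did not survive the unplug, as logged
  scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length'   # expect 2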
target/device_removal.sh@160 -- # rescan_pci 00:14:15.239 17:30:41 -- target/device_removal.sh@57 -- # echo 1 00:14:15.852 [2024-04-18 17:30:42.354422] rdma.c:3314:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0x176d3d0, err 11. Skip rescan. 00:14:16.109 17:30:42 -- target/device_removal.sh@162 -- # seq 1 10 00:14:16.109 17:30:42 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:14:16.109 17:30:42 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1/net 00:14:16.109 17:30:42 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:14:16.109 17:30:42 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:14:16.109 17:30:42 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:14:16.109 17:30:42 -- target/device_removal.sh@171 -- # break 00:14:16.109 17:30:42 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:14:16.109 17:30:42 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:14:16.109 [2024-04-18 17:30:42.467085] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1784840/0x17c70b0) succeed. 00:14:16.109 [2024-04-18 17:30:42.467177] rdma.c:3367:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 00:14:17.049 17:30:43 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:14:17.049 17:30:43 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:17.049 17:30:43 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:17.049 17:30:43 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:17.049 17:30:43 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:17.049 17:30:43 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:14:17.049 17:30:43 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:14:17.049 17:30:43 -- target/device_removal.sh@186 -- # seq 1 10 00:14:17.049 17:30:43 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:14:17.049 17:30:43 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:14:17.049 17:30:43 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:14:17.049 17:30:43 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:14:17.049 17:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:17.049 17:30:43 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:14:17.049 17:30:43 -- common/autotest_common.sh@10 -- # set +x 00:14:17.049 [2024-04-18 17:30:43.355698] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:14:17.049 [2024-04-18 17:30:43.355758] rdma.c:3373:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:14:17.049 [2024-04-18 17:30:43.355801] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:14:17.049 [2024-04-18 17:30:43.355827] rdma.c:3897:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:14:17.049 17:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:17.049 17:30:43 -- target/device_removal.sh@187 -- # ib_count=2 00:14:17.049 17:30:43 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:14:17.049 17:30:43 -- target/device_removal.sh@189 -- # break 00:14:17.049 17:30:43 -- target/device_removal.sh@200 -- # stop_bdevperf 00:14:17.049 17:30:43 -- target/device_removal.sh@116 -- # wait 2009227 00:15:38.536 0 00:15:38.536 17:31:57 -- target/device_removal.sh@118 -- # killprocess 2009135 00:15:38.536 17:31:57 -- 
common/autotest_common.sh@936 -- # '[' -z 2009135 ']' 00:15:38.536 17:31:57 -- common/autotest_common.sh@940 -- # kill -0 2009135 00:15:38.536 17:31:57 -- common/autotest_common.sh@941 -- # uname 00:15:38.536 17:31:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.536 17:31:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2009135 00:15:38.536 17:31:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:38.536 17:31:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:38.536 17:31:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2009135' 00:15:38.536 killing process with pid 2009135 00:15:38.536 17:31:57 -- common/autotest_common.sh@955 -- # kill 2009135 00:15:38.536 17:31:57 -- common/autotest_common.sh@960 -- # wait 2009135 00:15:38.537 17:31:57 -- target/device_removal.sh@119 -- # bdevperf_pid= 00:15:38.537 17:31:57 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:15:38.537 [2024-04-18 17:30:26.806132] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:15:38.537 [2024-04-18 17:30:26.806224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2009135 ] 00:15:38.537 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.537 [2024-04-18 17:30:26.864748] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.537 [2024-04-18 17:30:26.969280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.537 Running I/O for 90 seconds... 00:15:38.537 [2024-04-18 17:30:32.368971] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:15:38.537 [2024-04-18 17:30:32.369016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.537 [2024-04-18 17:30:32.369047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:05dc p:0 m:0 dnr:0 00:15:38.537 [2024-04-18 17:30:32.369073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.537 [2024-04-18 17:30:32.369098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:05dc p:0 m:0 dnr:0 00:15:38.537 [2024-04-18 17:30:32.369123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.537 [2024-04-18 17:30:32.369147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:05dc p:0 m:0 dnr:0 00:15:38.537 [2024-04-18 17:30:32.369171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.537 [2024-04-18 17:30:32.369193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:05dc p:0 m:0 dnr:0 00:15:38.537 [2024-04-18 17:30:32.370599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:38.537 [2024-04-18 17:30:32.370633] nvme_ctrlr.c:1041:nvme_ctrlr_fail: 
*ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:15:38.537 [2024-04-18 17:30:32.370698] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:15:38.537 [2024-04-18 17:30:32.379120] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.389126] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.399156] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.409179] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.419206] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.429232] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.439259] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.449287] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.459313] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.469342] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.479367] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.489398] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.499421] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.509446] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.519474] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.529520] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.539546] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.549718] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.559744] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.570733] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.581709] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.592742] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
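For reference, the check_rdma_dev_exists_in_nvmf_tgt calls traced at the top of this run reduce to one nvmf_get_stats RPC plus a grep over the reported device names. A minimal bash sketch of that check, assuming rpc_cmd is the usual autotest wrapper around SPDK's scripts/rpc.py pointed at the running target:

  # Return 0 if the nvmf target still reports the given RDMA device, non-zero otherwise.
  check_rdma_dev_exists_in_nvmf_tgt() {
      local rdma_dev_name=$1
      rpc_cmd nvmf_get_stats |
          jq -r '.poll_groups[0].transports[].devices[].name' |
          grep -q "$rdma_dev_name"
  }

In the trace above this returns 0 just before the removal (mlx5_1 is still listed) and non-zero afterwards, which is what the @148/@149 retry-and-break logic keys off.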
00:15:38.537 [2024-04-18 17:30:32.603575] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.613830] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.623877] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.633901] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.643926] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.654329] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.664355] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.674459] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.684790] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.694813] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.704837] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.715721] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.725746] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.736592] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.747252] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.757278] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.767423] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.777448] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.787473] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.797651] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.807675] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.817705] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.827729] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
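The remove_one_nic and rescan_pci steps traced earlier drive the hot-removal entirely through sysfs: the netdev is resolved to its PCI directory with readlink, the function is removed, and the bus is rescanned before the interface is brought back up and re-addressed. A rough sketch of that sequence; the /sys/bus/pci remove and rescan attributes are the standard kernel entry points and are an assumption here, since the trace only shows the echo 1 and the readlink:

  # Resolve mlx_0_1 to its PCI device directory,
  # e.g. /sys/devices/pci0000:00/0000:00:03.0/0000:09:00.1
  pci_dir=$(readlink -f /sys/bus/pci/devices/0000:09:00.1/net/mlx_0_1/device)

  echo 1 > "$pci_dir/remove"      # hot-remove the function; mlx_0_1 and mlx5_1 disappear
  echo 1 > /sys/bus/pci/rescan    # let the PCI core re-enumerate the device

  # Once the netdev shows up again under $pci_dir/net, restore link and address:
  ip link set mlx_0_1 up
  ip addr add 192.168.100.9/24 dev mlx_0_1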
00:15:38.537 [2024-04-18 17:30:32.837756] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.847905] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.857977] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.868141] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.878264] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.888288] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.898321] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.908345] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.918448] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.928937] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.938957] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.949913] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.537 [2024-04-18 17:30:32.960006] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:32.970029] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:32.980058] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:32.990089] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.000117] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.010146] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.020174] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.030201] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.040228] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.050255] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.060283] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
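After the rescan the script polls the target until the RDMA device count rises above the post-removal value (1 -> 2 in this run), at which point port 192.168.100.9:4420 is listening again. A condensed sketch of that loop, reusing the nvmf_get_stats query shown in the trace; the retry pacing is an assumption, as the interval is not visible in the log:

  get_rdma_dev_count_in_nvmf_tgt() {
      rpc_cmd nvmf_get_stats |
          jq -r '.poll_groups[0].transports[].devices | length'
  }

  ib_count_after_remove=1
  for i in $(seq 1 10); do
      ib_count=$(get_rdma_dev_count_in_nvmf_tgt)
      ((ib_count > ib_count_after_remove)) && break
      sleep 5   # assumed pacing between retries
  done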
00:15:38.538 [2024-04-18 17:30:33.070311] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.080351] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.090364] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.100654] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.110925] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.121542] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.131697] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.142074] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.152129] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.162152] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.172210] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.182234] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.192350] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.203827] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.213853] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.224153] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.234177] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.244261] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.254290] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.264346] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.274371] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.285227] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.295266] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
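Finally, stop_bdevperf waits on the bdevperf wrapper (pid 2009227 above) and calls killprocess on the bdevperf pid 2009135; the @936-@960 trace lines map onto the usual autotest_common.sh helper. A simplified sketch of what those checks amount to (the sudo branch, not taken in this run, is omitted):

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                          # process must still exist
      if [[ $(uname) == Linux ]]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid") # reactor_2 in this run
          [[ $process_name != sudo ]] || return 1         # real helper special-cases sudo; simplified here
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }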
00:15:38.538 [2024-04-18 17:30:33.305430] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.315469] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.325481] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.335731] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.345754] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.355781] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.365808] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.538 [2024-04-18 17:30:33.373189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:131280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:131288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:131296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:131304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:131312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:131320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:131328 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:131336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:131344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:131352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:131360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:131368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:131376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:131384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.373966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:131392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.373989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.374017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:131400 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007752000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.374042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.374070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:131408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.374094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.374124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:131416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x1860ef 00:15:38.538 [2024-04-18 17:30:33.374149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.538 [2024-04-18 17:30:33.374178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:131424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:131432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:131440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:131448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:131456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:131464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:131472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 
key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:131480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:131488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:131496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:131504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:131512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:131520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:131528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:131536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.374950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.374977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:131544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x1860ef 00:15:38.539 
[2024-04-18 17:30:33.375001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:131552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:131560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:131568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:131576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:131584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:131592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:131600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:131608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:131616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 
17:30:33.375489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:131624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:131632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:131640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:131648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:131656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:131664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:131672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:131680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:131688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.375959] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.375986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:131696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x1860ef 00:15:38.539 [2024-04-18 17:30:33.376010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.539 [2024-04-18 17:30:33.376037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:131704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:131712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:131720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:131728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:131736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:131744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:131752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077aa000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:131760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ac000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:131768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ae000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:131776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b0000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:131784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b2000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:131792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b4000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:131800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b6000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:131808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b8000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:131816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ba000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:131824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077bc000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:131832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077be000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.376964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:131840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c0000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.376988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:131848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c2000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:131856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c4000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:131864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c6000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:131872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c8000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:131880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ca000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:131888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077cc000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:131896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ce000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:131904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d0000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:131912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d2000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:131920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d4000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:131928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d6000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:131936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d8000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:131944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077da000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:131952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077dc000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:131960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077de000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:131968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e0000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.540 [2024-04-18 17:30:33.377877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:131976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e2000 len:0x1000 key:0x1860ef 00:15:38.540 [2024-04-18 17:30:33.377905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.377934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:131984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e4000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.377960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.377987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:131992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e6000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.378014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:132000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e8000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.378072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:132008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ea000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.378126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:132016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ec000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.378178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:132024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ee000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.378231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:132032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f0000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.378284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:132040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f2000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.378337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:132048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f4000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.378399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:132056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f6000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.378462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:132064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f8000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.378523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:132072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fa000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.378577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:132080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fc000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.378631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:132088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fe000 len:0x1000 key:0x1860ef 00:15:38.541 [2024-04-18 17:30:33.378689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:132096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.378754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:132104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.378806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:132112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.378856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:132120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.378907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 
[2024-04-18 17:30:33.378935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:132128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.378958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.378987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:132136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.379010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.379038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:132144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.379061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.379089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:132152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.379112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.379140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:132160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.379165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.379192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:132168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.379217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.379245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:132176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.379270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.379296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:132184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.379323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.379355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:132192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.379387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.379427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:132200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.541 [2024-04-18 17:30:33.379453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.541 [2024-04-18 17:30:33.379479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:132208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.542 [2024-04-18 17:30:33.379514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:33.379539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:132216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.542 [2024-04-18 17:30:33.379565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:33.379592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:132224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.542 [2024-04-18 17:30:33.379618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:33.379645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:132232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.542 [2024-04-18 17:30:33.379670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:33.379698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:132240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.542 [2024-04-18 17:30:33.379724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:33.379750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:132248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.542 [2024-04-18 17:30:33.379775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:33.379802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:132256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.542 [2024-04-18 17:30:33.379827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:33.379854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:132264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.542 [2024-04-18 17:30:33.379878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:33.379905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:132272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.542 [2024-04-18 17:30:33.379928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:33.379957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:132280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.542 [2024-04-18 17:30:33.379980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:33.380009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:132288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:38.542 [2024-04-18 17:30:33.380039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:33.395835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:38.542 [2024-04-18 17:30:33.395860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:38.542 [2024-04-18 17:30:33.395881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:132296 len:8 PRP1 0x0 PRP2 0x0 00:15:38.542 [2024-04-18 17:30:33.395905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:33.399522] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:15:38.542 [2024-04-18 17:30:33.399917] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:15:38.542 [2024-04-18 17:30:33.399946] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:38.542 [2024-04-18 17:30:33.399966] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:15:38.542 [2024-04-18 17:30:33.400009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:38.542 [2024-04-18 17:30:33.400036] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:15:38.542 [2024-04-18 17:30:33.400068] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:15:38.542 [2024-04-18 17:30:33.400093] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:15:38.542 [2024-04-18 17:30:33.400118] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:15:38.542 [2024-04-18 17:30:33.400163] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:38.542 [2024-04-18 17:30:33.400190] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:15:38.542 [2024-04-18 17:30:35.406590] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:38.542 [2024-04-18 17:30:35.406641] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:15:38.542 [2024-04-18 17:30:35.406706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:38.542 [2024-04-18 17:30:35.406733] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:15:38.542 [2024-04-18 17:30:35.406860] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:15:38.542 [2024-04-18 17:30:35.406889] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:15:38.542 [2024-04-18 17:30:35.406913] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:15:38.542 [2024-04-18 17:30:35.406964] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:38.542 [2024-04-18 17:30:35.406990] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:15:38.542 [2024-04-18 17:30:37.412193] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:38.542 [2024-04-18 17:30:37.412246] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:15:38.542 [2024-04-18 17:30:37.412314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:38.542 [2024-04-18 17:30:37.412340] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:15:38.542 [2024-04-18 17:30:37.412393] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:15:38.542 [2024-04-18 17:30:37.412429] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:15:38.542 [2024-04-18 17:30:37.412453] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:15:38.542 [2024-04-18 17:30:37.412503] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:15:38.542 [2024-04-18 17:30:37.412530] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:15:38.542 [2024-04-18 17:30:37.901839] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:15:38.542 [2024-04-18 17:30:37.901885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.542 [2024-04-18 17:30:37.901912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:05dc p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:37.901952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.542 [2024-04-18 17:30:37.901973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:05dc p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:37.901995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.542 [2024-04-18 17:30:37.902016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:05dc p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:37.902037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:38.542 [2024-04-18 17:30:37.902060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32535 cdw0:16 sqhd:05dc p:0 m:0 dnr:0 00:15:38.542 [2024-04-18 17:30:37.906968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:38.542 [2024-04-18 17:30:37.907004] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:38.542 [2024-04-18 17:30:37.907075] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:15:38.542 [2024-04-18 17:30:37.911841] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:37.921863] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:37.931890] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:37.941918] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:37.951944] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:37.961971] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:37.971998] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:37.982025] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:37.992051] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:15:38.542 [2024-04-18 17:30:38.002076] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:38.012101] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:38.022127] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:38.032153] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:38.042181] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:38.052207] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:38.062233] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:38.072259] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:38.082287] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:38.092313] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:38.102341] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.542 [2024-04-18 17:30:38.112368] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.122397] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.132422] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.142450] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.152477] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.162502] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.172530] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.182555] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.192583] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.202610] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.212636] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.222663] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:15:38.543 [2024-04-18 17:30:38.232691] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.242718] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.252745] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.262770] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.272796] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.282823] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.292849] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.302874] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.312902] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.322927] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.332952] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.342981] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.353009] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.363036] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.373062] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.383087] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.393115] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.403141] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.413167] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.423198] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.443245] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.453210] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.456808] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:38.543 [2024-04-18 17:30:38.463235] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.473260] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.483286] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.493313] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.503339] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.513387] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.523407] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.533432] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.543459] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.553485] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.563510] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.573537] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.583562] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.593588] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.603613] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.613641] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.623678] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.633705] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.643731] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.653755] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.663783] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.673811] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.683839] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:15:38.543 [2024-04-18 17:30:38.693863] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.703892] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.713918] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.723946] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.733973] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.744001] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.754029] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.764057] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.774087] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.784115] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.794140] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.804165] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.814191] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.824217] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.834252] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.844279] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.854304] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.864331] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.874357] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.884590] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.894615] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:15:38.543 [2024-04-18 17:30:38.904641] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:15:38.543 [2024-04-18 17:30:38.909567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:244736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fe000 len:0x1000 key:0x1c20ef 00:15:38.543 [2024-04-18 17:30:38.909598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.543 [2024-04-18 17:30:38.909641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:244744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fc000 len:0x1000 key:0x1c20ef 00:15:38.543 [2024-04-18 17:30:38.909669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.543 [2024-04-18 17:30:38.909713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:244752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fa000 len:0x1000 key:0x1c20ef 00:15:38.543 [2024-04-18 17:30:38.909737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.543 [2024-04-18 17:30:38.909765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:244760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f8000 len:0x1000 key:0x1c20ef 00:15:38.543 [2024-04-18 17:30:38.909791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.543 [2024-04-18 17:30:38.909819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:244768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f6000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.909844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.909871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:244776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f4000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.909896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.909923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:244784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f2000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.909947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.909973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:244792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079f0000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.909998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:244800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ee000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 
17:30:38.910076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:244808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ec000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:244816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ea000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:244824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e8000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:244832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e6000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:244840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e4000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:244848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e2000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:244856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079e0000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:244864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079de000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:244872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079dc000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910579] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:244880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079da000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:244888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d8000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:244896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d6000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:244904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d4000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:244912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d2000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:244920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079d0000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:244928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ce000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.910952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:244936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079cc000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.910974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:244944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ca000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:244952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c8000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:244960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c6000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:244968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c4000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:244976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c2000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:244984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079c0000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:244992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079be000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:245000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079bc000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:245008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ba000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:245016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b8000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:245024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b6000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:245032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b4000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:245040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b2000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:245048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079b0000 len:0x1000 key:0x1c20ef 00:15:38.544 [2024-04-18 17:30:38.911751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.544 [2024-04-18 17:30:38.911777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:245056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ae000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.911803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.911829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:245064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079ac000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.911854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.911880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:245072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079aa000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.911906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.911932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:245080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a8000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.911958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.911984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:245088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a6000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:245096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a4000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:245104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a2000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:245112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079a0000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:245120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799e000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:245128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799c000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:245136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000799a000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:245144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007998000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:245152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007996000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:245160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007994000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:245168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007992000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:245176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007990000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:245184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798e000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:245192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798c000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:245200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000798a000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:245208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007988000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:245216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007986000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:245224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007984000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.912975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:245232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007982000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.912999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:245240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007980000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.913053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:245248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797e000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.913104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:245256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797c000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.913156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:245264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000797a000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.913208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:245272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007978000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.913268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:245280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007976000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.913321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:245288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007974000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.913399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:245296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007972000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.913455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:245304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007970000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.913508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:245312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796e000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.913562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:245320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796c000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.913615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:245328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000796a000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.913669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:245336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007968000 len:0x1000 key:0x1c20ef 00:15:38.545 [2024-04-18 17:30:38.913723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.545 [2024-04-18 17:30:38.913752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:245344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007966000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.913776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.913806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:245352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007964000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.913830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.913859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:245360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007962000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.913902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.913930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:245368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007960000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.913969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.913998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:245376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795e000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914051] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:245384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795c000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:245392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000795a000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:245400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007958000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:245408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007956000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:245416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007954000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:245424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007952000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:245432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007950000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:245440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794e000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:245448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794c000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:245456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000794a000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:245464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007948000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:245472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007946000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:245480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007944000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:245488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007942000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:245496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007940000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:245504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793e000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.914964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:245512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793c000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.914986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.915013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:245520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000793a000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.915052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.915080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:245528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007938000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.915104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.915130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:245536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007936000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.915156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.915188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:245544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007934000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.915214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.915241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:245552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007932000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.915268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.915296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:245560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007930000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.915322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.915348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:245568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792e000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.915374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.915409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:245576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792c000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.915435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.915462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:245584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000792a000 len:0x1000 key:0x1c20ef 00:15:38.546 [2024-04-18 17:30:38.915487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.546 [2024-04-18 17:30:38.915513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:245592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007928000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.915539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.915565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:245600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007926000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.915590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.915616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:245608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007924000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.915640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.915669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:245616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007922000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.915693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.915722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:245624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007920000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.915746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.915780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:245632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791e000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.915805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.915833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:245640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791c000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.915856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.915885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:245648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000791a000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.915909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.915938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:245656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007918000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.915961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.915989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:245664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007916000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.916013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.916042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:245672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007914000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.916065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.916094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:245680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007912000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.916119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.916147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:245688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007910000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.916185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.916213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:245696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790e000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.916250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.916279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:245704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790c000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.916302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.916330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:245712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000790a000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.916354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.916399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:245720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007908000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.916425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.916456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:245728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007906000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.916480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.916508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:245736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007904000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.916532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.916560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:245744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1c20ef 00:15:38.547 [2024-04-18 17:30:38.916585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32535 cdw0:ed2cc880 sqhd:6b93 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.932907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:38.547 [2024-04-18 17:30:38.932931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:38.547 [2024-04-18 17:30:38.932952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:245752 len:8 PRP1 0x0 PRP2 0x0 00:15:38.547 [2024-04-18 17:30:38.932976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:38.547 [2024-04-18 17:30:38.933063] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:38.547 [2024-04-18 17:30:38.933416] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:15:38.547 [2024-04-18 17:30:38.933443] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:38.547 [2024-04-18 17:30:38.933464] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:38.547 [2024-04-18 17:30:38.933506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:38.547 [2024-04-18 17:30:38.933534] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:38.547 [2024-04-18 17:30:38.933565] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:38.547 [2024-04-18 17:30:38.933591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:38.547 [2024-04-18 17:30:38.933614] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:38.547 [2024-04-18 17:30:38.933654] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:38.547 [2024-04-18 17:30:38.933680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:38.547 [2024-04-18 17:30:40.939251] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:38.547 [2024-04-18 17:30:40.939309] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:38.547 [2024-04-18 17:30:40.939361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:38.547 [2024-04-18 17:30:40.939413] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 
00:15:38.547 [2024-04-18 17:30:40.939447] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:38.547 [2024-04-18 17:30:40.939482] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:38.547 [2024-04-18 17:30:40.939505] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:38.547 [2024-04-18 17:30:40.939557] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:38.547 [2024-04-18 17:30:40.939585] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:38.547 [2024-04-18 17:30:42.945095] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:15:38.547 [2024-04-18 17:30:42.945160] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:15:38.547 [2024-04-18 17:30:42.945218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:38.547 [2024-04-18 17:30:42.945245] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:15:38.547 [2024-04-18 17:30:42.945280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:15:38.547 [2024-04-18 17:30:42.945305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:15:38.547 [2024-04-18 17:30:42.945328] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:15:38.547 [2024-04-18 17:30:42.945401] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:38.547 [2024-04-18 17:30:42.945431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:15:38.547 [2024-04-18 17:30:44.002952] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
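The trace above is the device-removal recovery path: once RDMA address resolution fails for the removed port, bdev_nvme marks the controller failed and schedules another reset roughly every two seconds (17:30:38.93, 17:30:40.94, 17:30:42.95) until a reset finally succeeds at 17:30:44. A minimal bash sketch of that kind of fixed-interval, bounded reconnect loop is shown below; it is a conceptual illustration only (the real logic lives in SPDK's bdev_nvme reset path, and try_reconnect here is a hypothetical placeholder for one reset attempt).

    # Conceptual sketch only -- not part of the test. Retry a connect-style
    # operation at a fixed interval until it succeeds or the budget runs out,
    # mirroring the ~2 s reset cadence visible in the timestamps above.
    reconnect_with_retries() {
        local retries=$1 interval=$2 attempt=0
        while (( attempt < retries )); do
            if try_reconnect; then
                echo "reconnect succeeded after $((attempt + 1)) attempt(s)"
                return 0
            fi
            attempt=$((attempt + 1))
            sleep "$interval"
        done
        echo "reconnect failed after $retries attempts" >&2
        return 1
    }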
00:15:38.547 
00:15:38.547 Latency(us)
00:15:38.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:38.547 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:15:38.547 Verification LBA range: start 0x0 length 0x8000
00:15:38.547 Nvme_mlx_0_0n1 : 90.00 9517.02 37.18 0.00 0.00 13425.64 2791.35 7058858.29
00:15:38.547 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:15:38.547 Verification LBA range: start 0x0 length 0x8000
00:15:38.547 Nvme_mlx_0_1n1 : 90.01 9119.22 35.62 0.00 0.00 14010.75 2949.12 7058858.29
00:15:38.547 ===================================================================================================================
00:15:38.547 Total : 18636.24 72.80 0.00 0.00 13711.97 2791.35 7058858.29
00:15:38.547 Received shutdown signal, test time was about 90.000000 seconds
00:15:38.547 
00:15:38.548 Latency(us)
00:15:38.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:38.548 ===================================================================================================================
00:15:38.548 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:38.548 17:31:57 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:15:38.548 17:31:57 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:15:38.548 17:31:57 -- target/device_removal.sh@202 -- # killprocess 2008991
00:15:38.548 17:31:57 -- common/autotest_common.sh@936 -- # '[' -z 2008991 ']'
00:15:38.548 17:31:57 -- common/autotest_common.sh@940 -- # kill -0 2008991
00:15:38.548 17:31:57 -- common/autotest_common.sh@941 -- # uname
00:15:38.548 17:31:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:38.548 17:31:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2008991
00:15:38.548 17:31:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:38.548 17:31:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:38.548 17:31:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2008991'
00:15:38.548 killing process with pid 2008991
00:15:38.548 17:31:58 -- common/autotest_common.sh@955 -- # kill 2008991
00:15:38.548 17:31:58 -- common/autotest_common.sh@960 -- # wait 2008991
00:15:38.548 17:31:58 -- target/device_removal.sh@203 -- # nvmfpid=
00:15:38.548 17:31:58 -- target/device_removal.sh@205 -- # return 0
00:15:38.548 
00:15:38.548 real 1m32.352s
00:15:38.548 user 4m29.317s
00:15:38.548 sys 0m2.586s
00:15:38.548 17:31:58 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:15:38.548 17:31:58 -- common/autotest_common.sh@10 -- # set +x
00:15:38.548 ************************************
00:15:38.548 END TEST nvmf_device_removal_pci_remove
00:15:38.548 ************************************
00:15:38.548 17:31:58 -- target/device_removal.sh@317 -- # nvmftestfini
00:15:38.548 17:31:58 -- nvmf/common.sh@477 -- # nvmfcleanup
00:15:38.548 17:31:58 -- nvmf/common.sh@117 -- # sync
00:15:38.548 17:31:58 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:15:38.548 17:31:58 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:15:38.548 17:31:58 -- nvmf/common.sh@120 -- # set +e
00:15:38.548 17:31:58 -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:38.548 17:31:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:15:38.548 rmmod nvme_rdma
00:15:38.548 rmmod nvme_fabrics
00:15:38.548 17:31:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:38.548 
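The Total row in the first latency table above is a straight aggregation of the two per-device rows: IOPS and MiB/s are summed, min/max are taken across both devices, and the average latency is consistent with an IOPS-weighted mean of the per-device averages (the small difference comes from the report being computed on unrounded counters). A quick check using only the numbers printed above:

    # Sanity-check of the Total row from the printed per-device values.
    awk 'BEGIN {
        iops0 = 9517.02; lat0 = 13425.64;   # Nvme_mlx_0_0n1
        iops1 = 9119.22; lat1 = 14010.75;   # Nvme_mlx_0_1n1
        printf "IOPS   : %.2f\n", iops0 + iops1;      # 18636.24
        printf "MiB/s  : %.2f\n", 37.18 + 35.62;      # 72.80
        # IOPS-weighted average latency: ~13711.95 us vs the reported 13711.97 us
        printf "avg us : %.2f\n", (iops0*lat0 + iops1*lat1) / (iops0 + iops1);
    }'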
17:31:58 -- nvmf/common.sh@124 -- # set -e 00:15:38.548 17:31:58 -- nvmf/common.sh@125 -- # return 0 00:15:38.548 17:31:58 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:38.548 17:31:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:38.548 17:31:58 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:15:38.548 17:31:58 -- target/device_removal.sh@318 -- # clean_bond_device 00:15:38.548 17:31:58 -- target/device_removal.sh@240 -- # ip link 00:15:38.548 17:31:58 -- target/device_removal.sh@240 -- # grep bond_nvmf 00:15:38.548 00:15:38.548 real 3m7.168s 00:15:38.548 user 8m59.449s 00:15:38.548 sys 0m6.714s 00:15:38.548 17:31:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:38.548 17:31:58 -- common/autotest_common.sh@10 -- # set +x 00:15:38.548 ************************************ 00:15:38.548 END TEST nvmf_device_removal 00:15:38.548 ************************************ 00:15:38.548 17:31:58 -- nvmf/nvmf.sh@79 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:15:38.548 17:31:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:38.548 17:31:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:38.548 17:31:58 -- common/autotest_common.sh@10 -- # set +x 00:15:38.548 ************************************ 00:15:38.548 START TEST nvmf_srq_overwhelm 00:15:38.548 ************************************ 00:15:38.548 17:31:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:15:38.548 * Looking for test storage... 00:15:38.548 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:38.548 17:31:58 -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.548 17:31:58 -- nvmf/common.sh@7 -- # uname -s 00:15:38.548 17:31:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.548 17:31:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.548 17:31:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.548 17:31:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.548 17:31:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.548 17:31:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.548 17:31:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.548 17:31:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.548 17:31:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.548 17:31:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.548 17:31:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.548 17:31:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.548 17:31:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.548 17:31:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.548 17:31:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.548 17:31:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.548 17:31:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:38.548 17:31:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.548 17:31:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.548 17:31:58 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.548 17:31:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.548 17:31:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.548 17:31:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.548 17:31:58 -- paths/export.sh@5 -- # export PATH 00:15:38.548 17:31:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.548 17:31:58 -- nvmf/common.sh@47 -- # : 0 00:15:38.548 17:31:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.548 17:31:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.548 17:31:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.548 17:31:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.548 17:31:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.548 17:31:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.548 17:31:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.548 17:31:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.548 17:31:58 -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:38.548 17:31:58 -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:38.548 17:31:58 -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:15:38.548 17:31:58 -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:15:38.548 17:31:58 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:15:38.548 17:31:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.548 17:31:58 -- nvmf/common.sh@437 -- # prepare_net_devs 
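The PATH values printed above grow because paths/export.sh prepends the same tool directories each time it is sourced, so the golangci, go and protoc entries pile up at the front. A minimal sketch of that prepend pattern (ordering taken from the trace), together with a hypothetical duplicate-safe variant for comparison, which the script above does not use:

    # Unconditional prepends, matching the ordering visible in the trace above:
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH

    # Hypothetical duplicate-safe variant (illustration only):
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    export PATH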
00:15:38.548 17:31:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:38.548 17:31:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:38.548 17:31:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.548 17:31:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.548 17:31:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.548 17:31:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:38.548 17:31:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:38.548 17:31:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:38.548 17:31:58 -- common/autotest_common.sh@10 -- # set +x 00:15:38.548 17:32:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:38.548 17:32:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.548 17:32:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.548 17:32:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.548 17:32:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.548 17:32:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.548 17:32:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.548 17:32:00 -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.548 17:32:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.548 17:32:00 -- nvmf/common.sh@296 -- # e810=() 00:15:38.548 17:32:00 -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.548 17:32:00 -- nvmf/common.sh@297 -- # x722=() 00:15:38.548 17:32:00 -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.548 17:32:00 -- nvmf/common.sh@298 -- # mlx=() 00:15:38.548 17:32:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.548 17:32:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.548 17:32:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.548 17:32:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.548 17:32:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.548 17:32:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.548 17:32:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.548 17:32:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.548 17:32:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.549 17:32:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.549 17:32:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.549 17:32:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.549 17:32:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.549 17:32:00 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:38.549 17:32:00 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:38.549 17:32:00 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:38.549 17:32:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.549 17:32:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:15:38.549 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:15:38.549 17:32:00 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:38.549 17:32:00 -- 
nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:38.549 17:32:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:15:38.549 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:15:38.549 17:32:00 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:38.549 17:32:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.549 17:32:00 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.549 17:32:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:38.549 17:32:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.549 17:32:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:15:38.549 Found net devices under 0000:09:00.0: mlx_0_0 00:15:38.549 17:32:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.549 17:32:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.549 17:32:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:38.549 17:32:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.549 17:32:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:15:38.549 Found net devices under 0000:09:00.1: mlx_0_1 00:15:38.549 17:32:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.549 17:32:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:38.549 17:32:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:38.549 17:32:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@409 -- # rdma_device_init 00:15:38.549 17:32:00 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:15:38.549 17:32:00 -- nvmf/common.sh@58 -- # uname 00:15:38.549 17:32:00 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:38.549 17:32:00 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:38.549 17:32:00 -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:38.549 17:32:00 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:38.549 17:32:00 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:38.549 17:32:00 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:38.549 17:32:00 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:38.549 17:32:00 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:38.549 17:32:00 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:15:38.549 17:32:00 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:38.549 17:32:00 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:38.549 17:32:00 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:38.549 17:32:00 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:38.549 17:32:00 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:38.549 17:32:00 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:38.549 
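rdma_device_init above amounts to loading the kernel's InfiniBand/RDMA stack before any NVMe-oF RDMA traffic can flow. Pulled out of the trace into standalone form, the same module sequence is simply:

    # Load the IB/RDMA kernel modules used by the mlx5 RDMA test path,
    # in the same order the trace above shows (requires root).
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done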
17:32:00 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:38.549 17:32:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:38.549 17:32:00 -- nvmf/common.sh@105 -- # continue 2 00:15:38.549 17:32:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:38.549 17:32:00 -- nvmf/common.sh@105 -- # continue 2 00:15:38.549 17:32:00 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:38.549 17:32:00 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:38.549 17:32:00 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:38.549 17:32:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:38.549 17:32:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:38.549 17:32:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:38.549 17:32:00 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:38.549 17:32:00 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:38.549 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:38.549 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:15:38.549 altname enp9s0f0np0 00:15:38.549 inet 192.168.100.8/24 scope global mlx_0_0 00:15:38.549 valid_lft forever preferred_lft forever 00:15:38.549 17:32:00 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:38.549 17:32:00 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:38.549 17:32:00 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:38.549 17:32:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:38.549 17:32:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:38.549 17:32:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:38.549 17:32:00 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:38.549 17:32:00 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:38.549 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:38.549 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:15:38.549 altname enp9s0f1np1 00:15:38.549 inet 192.168.100.9/24 scope global mlx_0_1 00:15:38.549 valid_lft forever preferred_lft forever 00:15:38.549 17:32:00 -- nvmf/common.sh@411 -- # return 0 00:15:38.549 17:32:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:38.549 17:32:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:38.549 17:32:00 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:15:38.549 17:32:00 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:38.549 17:32:00 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:38.549 17:32:00 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:38.549 17:32:00 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:38.549 17:32:00 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:15:38.549 17:32:00 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:38.549 17:32:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:38.549 17:32:00 -- nvmf/common.sh@105 -- # continue 2 00:15:38.549 17:32:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.549 17:32:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:38.549 17:32:00 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:38.549 17:32:00 -- nvmf/common.sh@105 -- # continue 2 00:15:38.549 17:32:00 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:38.549 17:32:00 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:38.549 17:32:00 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:38.549 17:32:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:38.549 17:32:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:38.549 17:32:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:38.549 17:32:00 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:38.549 17:32:00 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:38.549 17:32:00 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:38.549 17:32:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:38.549 17:32:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:38.549 17:32:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:38.549 17:32:00 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:15:38.549 192.168.100.9' 00:15:38.549 17:32:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:38.549 192.168.100.9' 00:15:38.549 17:32:00 -- nvmf/common.sh@446 -- # head -n 1 00:15:38.549 17:32:00 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:38.549 17:32:00 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:15:38.549 192.168.100.9' 00:15:38.549 17:32:00 -- nvmf/common.sh@447 -- # tail -n +2 00:15:38.549 17:32:00 -- nvmf/common.sh@447 -- # head -n 1 00:15:38.549 17:32:00 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:38.549 17:32:00 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:15:38.549 17:32:00 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:38.549 17:32:00 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:15:38.549 17:32:00 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:15:38.549 17:32:00 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:15:38.549 17:32:00 -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:15:38.549 17:32:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:38.549 17:32:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:38.549 17:32:00 -- common/autotest_common.sh@10 -- # set +x 00:15:38.549 17:32:00 -- nvmf/common.sh@470 -- # nvmfpid=2021699 00:15:38.549 17:32:00 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:38.549 17:32:00 -- nvmf/common.sh@471 -- # waitforlisten 2021699 00:15:38.550 17:32:00 -- common/autotest_common.sh@817 -- # '[' -z 2021699 ']' 00:15:38.550 17:32:00 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.550 17:32:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:38.550 17:32:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.550 17:32:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:38.550 17:32:00 -- common/autotest_common.sh@10 -- # set +x 00:15:38.550 [2024-04-18 17:32:00.692826] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:15:38.550 [2024-04-18 17:32:00.692910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.550 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.550 [2024-04-18 17:32:00.757150] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.550 [2024-04-18 17:32:00.872871] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.550 [2024-04-18 17:32:00.872937] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.550 [2024-04-18 17:32:00.872963] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.550 [2024-04-18 17:32:00.872985] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.550 [2024-04-18 17:32:00.873002] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.550 [2024-04-18 17:32:00.873095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.550 [2024-04-18 17:32:00.873152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.550 [2024-04-18 17:32:00.873267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.550 [2024-04-18 17:32:00.873275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.550 17:32:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:38.550 17:32:00 -- common/autotest_common.sh@850 -- # return 0 00:15:38.550 17:32:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:38.550 17:32:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:38.550 17:32:00 -- common/autotest_common.sh@10 -- # set +x 00:15:38.550 17:32:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.550 17:32:01 -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:15:38.550 17:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.550 17:32:01 -- common/autotest_common.sh@10 -- # set +x 00:15:38.550 [2024-04-18 17:32:01.051935] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfd9b60/0xfde050) succeed. 00:15:38.550 [2024-04-18 17:32:01.062939] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfdb150/0x101f6e0) succeed. 
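The steps above boil down to starting the SPDK NVMe-oF target application and creating an RDMA transport on it; the harness's rpc_cmd wrapper ultimately drives scripts/rpc.py. A minimal standalone sketch, with the core mask and transport options taken from the trace and everything else left at defaults:

    # Start the target on cores 0-3, then create the RDMA transport with the
    # same options the test passes above. The sleep is a crude stand-in for
    # the harness's waitforlisten helper.
    ./build/bin/nvmf_tgt -m 0xF &
    sleep 2

    ./scripts/rpc.py nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192 -s 1024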
00:15:38.550 17:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.550 17:32:01 -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:15:38.550 17:32:01 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:38.550 17:32:01 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:15:38.550 17:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.550 17:32:01 -- common/autotest_common.sh@10 -- # set +x 00:15:38.550 17:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.550 17:32:01 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:38.550 17:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.550 17:32:01 -- common/autotest_common.sh@10 -- # set +x 00:15:38.550 Malloc0 00:15:38.550 17:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.550 17:32:01 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:15:38.550 17:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.550 17:32:01 -- common/autotest_common.sh@10 -- # set +x 00:15:38.550 17:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.550 17:32:01 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:38.550 17:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.550 17:32:01 -- common/autotest_common.sh@10 -- # set +x 00:15:38.550 [2024-04-18 17:32:01.163220] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:38.550 17:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.550 17:32:01 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:15:38.550 17:32:04 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:15:38.550 17:32:04 -- common/autotest_common.sh@1221 -- # local i=0 00:15:38.550 17:32:04 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:15:38.550 17:32:04 -- common/autotest_common.sh@1222 -- # grep -q -w nvme0n1 00:15:38.550 17:32:04 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:15:38.550 17:32:04 -- common/autotest_common.sh@1228 -- # grep -q -w nvme0n1 00:15:38.550 17:32:04 -- common/autotest_common.sh@1232 -- # return 0 00:15:38.550 17:32:04 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:38.550 17:32:04 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:38.550 17:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.550 17:32:04 -- common/autotest_common.sh@10 -- # set +x 00:15:38.550 17:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.550 17:32:04 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:38.550 17:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.550 17:32:04 -- common/autotest_common.sh@10 -- # set +x 00:15:38.550 Malloc1 00:15:38.550 17:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.550 17:32:04 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:38.550 17:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.550 17:32:04 -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.550 17:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.550 17:32:04 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:38.550 17:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:38.550 17:32:04 -- common/autotest_common.sh@10 -- # set +x 00:15:38.550 17:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:38.550 17:32:04 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:42.768 17:32:08 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:15:42.768 17:32:08 -- common/autotest_common.sh@1221 -- # local i=0 00:15:42.768 17:32:08 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:15:42.768 17:32:08 -- common/autotest_common.sh@1222 -- # grep -q -w nvme1n1 00:15:42.768 17:32:08 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:15:42.768 17:32:08 -- common/autotest_common.sh@1228 -- # grep -q -w nvme1n1 00:15:42.768 17:32:08 -- common/autotest_common.sh@1232 -- # return 0 00:15:42.768 17:32:08 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:42.768 17:32:08 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:42.768 17:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.768 17:32:08 -- common/autotest_common.sh@10 -- # set +x 00:15:42.768 17:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.768 17:32:08 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:42.768 17:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.768 17:32:08 -- common/autotest_common.sh@10 -- # set +x 00:15:42.768 Malloc2 00:15:42.768 17:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.768 17:32:08 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:42.768 17:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.768 17:32:08 -- common/autotest_common.sh@10 -- # set +x 00:15:42.768 17:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.768 17:32:08 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:15:42.768 17:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.768 17:32:08 -- common/autotest_common.sh@10 -- # set +x 00:15:42.768 17:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.768 17:32:08 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:15:46.057 17:32:12 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:15:46.057 17:32:12 -- common/autotest_common.sh@1221 -- # local i=0 00:15:46.057 17:32:12 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:15:46.057 17:32:12 -- common/autotest_common.sh@1222 -- # grep -q -w nvme2n1 00:15:46.057 17:32:12 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:15:46.057 17:32:12 -- common/autotest_common.sh@1228 -- # grep -q -w nvme2n1 00:15:46.057 17:32:12 -- common/autotest_common.sh@1232 -- # return 0 
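Each nvme connect above is followed by a waitforblk call that simply polls lsblk until the new namespace (nvme0n1, nvme1n1, ...) appears before the next subsystem is configured. A minimal sketch of that polling pattern as seen in the xtrace (the real helper lives in autotest_common.sh; the retry limit and sleep interval here are assumptions):

waitforblk() {
    local name=$1
    local i=0
    # poll until lsblk lists the block device, or give up after ~30 s (assumed limit)
    while ! lsblk -l -o NAME | grep -q -w "$name"; do
        i=$((i + 1))
        [ "$i" -gt 30 ] && return 1
        sleep 1
    done
    return 0
}

waitforblk nvme2n1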
00:15:46.057 17:32:12 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:46.057 17:32:12 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:46.057 17:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:46.057 17:32:12 -- common/autotest_common.sh@10 -- # set +x 00:15:46.057 17:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:46.057 17:32:12 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:46.057 17:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:46.057 17:32:12 -- common/autotest_common.sh@10 -- # set +x 00:15:46.057 Malloc3 00:15:46.057 17:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:46.057 17:32:12 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:46.057 17:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:46.057 17:32:12 -- common/autotest_common.sh@10 -- # set +x 00:15:46.057 17:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:46.057 17:32:12 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:15:46.057 17:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:46.057 17:32:12 -- common/autotest_common.sh@10 -- # set +x 00:15:46.057 17:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:46.057 17:32:12 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:15:49.390 17:32:15 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:15:49.390 17:32:15 -- common/autotest_common.sh@1221 -- # local i=0 00:15:49.390 17:32:15 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:15:49.390 17:32:15 -- common/autotest_common.sh@1222 -- # grep -q -w nvme3n1 00:15:49.390 17:32:15 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:15:49.390 17:32:15 -- common/autotest_common.sh@1228 -- # grep -q -w nvme3n1 00:15:49.390 17:32:15 -- common/autotest_common.sh@1232 -- # return 0 00:15:49.390 17:32:15 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:49.390 17:32:15 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:49.390 17:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.390 17:32:15 -- common/autotest_common.sh@10 -- # set +x 00:15:49.390 17:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.390 17:32:15 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:49.390 17:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.390 17:32:15 -- common/autotest_common.sh@10 -- # set +x 00:15:49.390 Malloc4 00:15:49.390 17:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.390 17:32:15 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:49.390 17:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.390 17:32:15 -- common/autotest_common.sh@10 -- # set +x 00:15:49.390 17:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.390 17:32:15 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:15:49.390 17:32:15 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.390 17:32:15 -- common/autotest_common.sh@10 -- # set +x 00:15:49.390 17:32:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.390 17:32:15 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:15:52.683 17:32:18 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:15:52.683 17:32:18 -- common/autotest_common.sh@1221 -- # local i=0 00:15:52.683 17:32:18 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:15:52.683 17:32:18 -- common/autotest_common.sh@1222 -- # grep -q -w nvme4n1 00:15:52.683 17:32:18 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:15:52.683 17:32:18 -- common/autotest_common.sh@1228 -- # grep -q -w nvme4n1 00:15:52.683 17:32:18 -- common/autotest_common.sh@1232 -- # return 0 00:15:52.683 17:32:18 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:15:52.683 17:32:18 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:15:52.683 17:32:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.683 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:52.683 17:32:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.683 17:32:18 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:52.683 17:32:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.683 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:52.683 Malloc5 00:15:52.683 17:32:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.683 17:32:18 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:52.683 17:32:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.683 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:52.683 17:32:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.683 17:32:18 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:15:52.683 17:32:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.683 17:32:18 -- common/autotest_common.sh@10 -- # set +x 00:15:52.683 17:32:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.683 17:32:18 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:15:55.968 17:32:22 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:15:55.968 17:32:22 -- common/autotest_common.sh@1221 -- # local i=0 00:15:55.968 17:32:22 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:15:55.968 17:32:22 -- common/autotest_common.sh@1222 -- # grep -q -w nvme5n1 00:15:55.968 17:32:22 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:15:55.968 17:32:22 -- common/autotest_common.sh@1228 -- # grep -q -w nvme5n1 00:15:55.968 17:32:22 -- common/autotest_common.sh@1232 -- # return 0 00:15:55.968 17:32:22 -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:15:55.968 [global] 00:15:55.968 thread=1 00:15:55.968 invalidate=1 00:15:55.968 rw=read 00:15:55.968 time_based=1 00:15:55.968 
runtime=10 00:15:55.968 ioengine=libaio 00:15:55.968 direct=1 00:15:55.968 bs=1048576 00:15:55.968 iodepth=128 00:15:55.968 norandommap=1 00:15:55.968 numjobs=13 00:15:55.968 00:15:55.968 [job0] 00:15:55.968 filename=/dev/nvme0n1 00:15:55.968 [job1] 00:15:55.968 filename=/dev/nvme1n1 00:15:55.968 [job2] 00:15:55.968 filename=/dev/nvme2n1 00:15:55.968 [job3] 00:15:55.968 filename=/dev/nvme3n1 00:15:55.968 [job4] 00:15:55.968 filename=/dev/nvme4n1 00:15:55.968 [job5] 00:15:55.968 filename=/dev/nvme5n1 00:15:56.225 Could not set queue depth (nvme0n1) 00:15:56.225 Could not set queue depth (nvme1n1) 00:15:56.225 Could not set queue depth (nvme2n1) 00:15:56.225 Could not set queue depth (nvme3n1) 00:15:56.225 Could not set queue depth (nvme4n1) 00:15:56.225 Could not set queue depth (nvme5n1) 00:15:56.225 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:15:56.225 ... 00:15:56.225 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:15:56.225 ... 00:15:56.225 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:15:56.225 ... 00:15:56.225 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:15:56.225 ... 00:15:56.225 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:15:56.225 ... 00:15:56.225 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:15:56.225 ... 00:15:56.225 fio-3.35 00:15:56.225 Starting 78 threads 00:16:11.109 00:16:11.109 job0: (groupid=0, jobs=1): err= 0: pid=2024586: Thu Apr 18 17:32:35 2024 00:16:11.109 read: IOPS=1, BW=1735KiB/s (1776kB/s)(18.0MiB/10625msec) 00:16:11.109 slat (usec): min=481, max=6396.3k, avg=588337.51, stdev=1582021.95 00:16:11.109 clat (msec): min=34, max=10624, avg=8655.62, stdev=2635.60 00:16:11.109 lat (msec): min=6430, max=10624, avg=9243.96, stdev=1560.80 00:16:11.109 clat percentiles (msec): 00:16:11.109 | 1.00th=[ 35], 5.00th=[ 35], 10.00th=[ 6409], 20.00th=[ 6477], 00:16:11.109 | 30.00th=[ 8658], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[10537], 00:16:11.109 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:16:11.109 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.109 | 99.99th=[10671] 00:16:11.109 lat (msec) : 50=5.56%, >=2000=94.44% 00:16:11.109 cpu : usr=0.00%, sys=0.10%, ctx=26, majf=0, minf=4609 00:16:11.109 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:16:11.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.109 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:11.109 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.109 job0: (groupid=0, jobs=1): err= 0: pid=2024587: Thu Apr 18 17:32:35 2024 00:16:11.109 read: IOPS=9, BW=9736KiB/s (9970kB/s)(101MiB/10623msec) 00:16:11.109 slat (usec): min=490, max=2107.0k, avg=104816.89, stdev=441887.56 00:16:11.109 clat (msec): min=35, max=10621, avg=7942.11, stdev=3356.26 00:16:11.109 lat (msec): min=2102, max=10622, avg=8046.92, stdev=3271.10 00:16:11.109 clat percentiles (msec): 00:16:11.109 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 2106], 20.00th=[ 4212], 
00:16:11.109 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10537], 60.00th=[10537], 00:16:11.109 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671], 00:16:11.109 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.109 | 99.99th=[10671] 00:16:11.109 lat (msec) : 50=0.99%, >=2000=99.01% 00:16:11.109 cpu : usr=0.00%, sys=0.76%, ctx=100, majf=0, minf=25857 00:16:11.109 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=7.9%, 16=15.8%, 32=31.7%, >=64=37.6% 00:16:11.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.109 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:11.109 issued rwts: total=101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.109 job0: (groupid=0, jobs=1): err= 0: pid=2024588: Thu Apr 18 17:32:35 2024 00:16:11.109 read: IOPS=5, BW=5702KiB/s (5839kB/s)(59.0MiB/10596msec) 00:16:11.109 slat (usec): min=477, max=4149.4k, avg=179000.66, stdev=691240.68 00:16:11.109 clat (msec): min=33, max=10593, avg=7358.37, stdev=2602.20 00:16:11.109 lat (msec): min=4183, max=10595, avg=7537.38, stdev=2448.38 00:16:11.109 clat percentiles (msec): 00:16:11.109 | 1.00th=[ 34], 5.00th=[ 4279], 10.00th=[ 4279], 20.00th=[ 4329], 00:16:11.109 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[ 8658], 00:16:11.109 | 70.00th=[ 8658], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:16:11.109 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:16:11.109 | 99.99th=[10537] 00:16:11.109 lat (msec) : 50=1.69%, >=2000=98.31% 00:16:11.109 cpu : usr=0.01%, sys=0.35%, ctx=46, majf=0, minf=15105 00:16:11.109 IO depths : 1=1.7%, 2=3.4%, 4=6.8%, 8=13.6%, 16=27.1%, 32=47.5%, >=64=0.0% 00:16:11.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.109 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:11.109 issued rwts: total=59,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.109 job0: (groupid=0, jobs=1): err= 0: pid=2024589: Thu Apr 18 17:32:35 2024 00:16:11.109 read: IOPS=3, BW=3191KiB/s (3267kB/s)(33.0MiB/10591msec) 00:16:11.109 slat (usec): min=474, max=4190.2k, avg=318167.59, stdev=906943.73 00:16:11.109 clat (msec): min=91, max=10588, avg=8897.92, stdev=2465.15 00:16:11.109 lat (msec): min=4281, max=10590, avg=9216.09, stdev=1907.45 00:16:11.109 clat percentiles (msec): 00:16:11.109 | 1.00th=[ 91], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[ 6477], 00:16:11.109 | 30.00th=[ 8658], 40.00th=[ 8658], 50.00th=[10402], 60.00th=[10537], 00:16:11.109 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:16:11.109 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:16:11.109 | 99.99th=[10537] 00:16:11.109 lat (msec) : 100=3.03%, >=2000=96.97% 00:16:11.109 cpu : usr=0.01%, sys=0.16%, ctx=46, majf=0, minf=8449 00:16:11.109 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:16:11.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.109 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:11.109 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.109 job0: (groupid=0, jobs=1): err= 0: pid=2024590: Thu Apr 18 17:32:35 2024 00:16:11.109 read: IOPS=3, BW=3369KiB/s 
(3450kB/s)(35.0MiB/10638msec) 00:16:11.109 slat (usec): min=530, max=4285.2k, avg=303497.16, stdev=907290.02 00:16:11.109 clat (msec): min=15, max=10636, avg=9688.31, stdev=2295.73 00:16:11.109 lat (msec): min=4300, max=10637, avg=9991.81, stdev=1565.23 00:16:11.109 clat percentiles (msec): 00:16:11.109 | 1.00th=[ 16], 5.00th=[ 4329], 10.00th=[ 6409], 20.00th=[10537], 00:16:11.109 | 30.00th=[10537], 40.00th=[10537], 50.00th=[10537], 60.00th=[10537], 00:16:11.109 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:16:11.109 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.109 | 99.99th=[10671] 00:16:11.109 lat (msec) : 20=2.86%, >=2000=97.14% 00:16:11.109 cpu : usr=0.00%, sys=0.29%, ctx=50, majf=0, minf=8961 00:16:11.109 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:16:11.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.109 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:11.109 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.109 job0: (groupid=0, jobs=1): err= 0: pid=2024591: Thu Apr 18 17:32:35 2024 00:16:11.109 read: IOPS=1, BW=1451KiB/s (1486kB/s)(15.0MiB/10587msec) 00:16:11.109 slat (usec): min=471, max=4206.0k, avg=701647.60, stdev=1276643.05 00:16:11.109 clat (msec): min=61, max=10473, avg=7132.30, stdev=2595.19 00:16:11.109 lat (msec): min=4267, max=10586, avg=7833.95, stdev=1867.91 00:16:11.109 clat percentiles (msec): 00:16:11.109 | 1.00th=[ 62], 5.00th=[ 62], 10.00th=[ 4279], 20.00th=[ 6409], 00:16:11.109 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8557], 00:16:11.109 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10537], 95.00th=[10537], 00:16:11.109 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:16:11.109 | 99.99th=[10537] 00:16:11.109 lat (msec) : 100=6.67%, >=2000=93.33% 00:16:11.109 cpu : usr=0.00%, sys=0.09%, ctx=32, majf=0, minf=3841 00:16:11.109 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.109 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.109 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.109 job0: (groupid=0, jobs=1): err= 0: pid=2024592: Thu Apr 18 17:32:35 2024 00:16:11.109 read: IOPS=1, BW=1061KiB/s (1087kB/s)(11.0MiB/10614msec) 00:16:11.109 slat (usec): min=829, max=6408.5k, avg=962833.31, stdev=1987501.85 00:16:11.109 clat (msec): min=21, max=10611, avg=8714.44, stdev=3199.14 00:16:11.109 lat (msec): min=6430, max=10613, avg=9677.28, stdev=1420.95 00:16:11.109 clat percentiles (msec): 00:16:11.109 | 1.00th=[ 22], 5.00th=[ 22], 10.00th=[ 6409], 20.00th=[ 8557], 00:16:11.109 | 30.00th=[ 8557], 40.00th=[ 8658], 50.00th=[10671], 60.00th=[10671], 00:16:11.109 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:16:11.109 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.109 | 99.99th=[10671] 00:16:11.109 lat (msec) : 50=9.09%, >=2000=90.91% 00:16:11.109 cpu : usr=0.00%, sys=0.08%, ctx=21, majf=0, minf=2817 00:16:11.109 IO depths : 1=9.1%, 2=18.2%, 4=36.4%, 8=36.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.109 
complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.109 issued rwts: total=11,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.109 job0: (groupid=0, jobs=1): err= 0: pid=2024593: Thu Apr 18 17:32:35 2024 00:16:11.109 read: IOPS=1, BW=1934KiB/s (1980kB/s)(20.0MiB/10591msec) 00:16:11.109 slat (usec): min=854, max=6408.5k, avg=528766.21, stdev=1509895.91 00:16:11.109 clat (msec): min=14, max=10573, avg=9303.30, stdev=2471.52 00:16:11.109 lat (msec): min=6423, max=10590, avg=9832.07, stdev=1166.43 00:16:11.109 clat percentiles (msec): 00:16:11.109 | 1.00th=[ 16], 5.00th=[ 16], 10.00th=[ 6409], 20.00th=[ 8557], 00:16:11.109 | 30.00th=[ 8658], 40.00th=[10402], 50.00th=[10402], 60.00th=[10402], 00:16:11.109 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:16:11.109 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:16:11.109 | 99.99th=[10537] 00:16:11.109 lat (msec) : 20=5.00%, >=2000=95.00% 00:16:11.110 cpu : usr=0.00%, sys=0.14%, ctx=40, majf=0, minf=5121 00:16:11.110 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0% 00:16:11.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.110 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:11.110 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.110 job0: (groupid=0, jobs=1): err= 0: pid=2024594: Thu Apr 18 17:32:35 2024 00:16:11.110 read: IOPS=6, BW=6910KiB/s (7076kB/s)(72.0MiB/10669msec) 00:16:11.110 slat (usec): min=449, max=6372.6k, avg=147564.17, stdev=810646.49 00:16:11.110 clat (msec): min=43, max=10667, avg=9903.85, stdev=1714.20 00:16:11.110 lat (msec): min=6416, max=10668, avg=10051.42, stdev=1247.15 00:16:11.110 clat percentiles (msec): 00:16:11.110 | 1.00th=[ 44], 5.00th=[ 6409], 10.00th=[ 8557], 20.00th=[10402], 00:16:11.110 | 30.00th=[10402], 40.00th=[10537], 50.00th=[10537], 60.00th=[10671], 00:16:11.110 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:16:11.110 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.110 | 99.99th=[10671] 00:16:11.110 lat (msec) : 50=1.39%, >=2000=98.61% 00:16:11.110 cpu : usr=0.00%, sys=0.52%, ctx=75, majf=0, minf=18433 00:16:11.110 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:16:11.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.110 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:11.110 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.110 job0: (groupid=0, jobs=1): err= 0: pid=2024595: Thu Apr 18 17:32:35 2024 00:16:11.110 read: IOPS=1, BW=1832KiB/s (1876kB/s)(19.0MiB/10618msec) 00:16:11.110 slat (usec): min=544, max=6368.7k, avg=555589.94, stdev=1546979.18 00:16:11.110 clat (msec): min=60, max=10616, avg=9084.47, stdev=2696.45 00:16:11.110 lat (msec): min=6429, max=10617, avg=9640.06, stdev=1597.47 00:16:11.110 clat percentiles (msec): 00:16:11.110 | 1.00th=[ 62], 5.00th=[ 62], 10.00th=[ 6409], 20.00th=[ 6477], 00:16:11.110 | 30.00th=[ 8658], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:16:11.110 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:16:11.110 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 
99.95th=[10671], 00:16:11.110 | 99.99th=[10671] 00:16:11.110 lat (msec) : 100=5.26%, >=2000=94.74% 00:16:11.110 cpu : usr=0.00%, sys=0.09%, ctx=31, majf=0, minf=4865 00:16:11.110 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0% 00:16:11.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.110 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:11.110 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.110 job0: (groupid=0, jobs=1): err= 0: pid=2024596: Thu Apr 18 17:32:35 2024 00:16:11.110 read: IOPS=2, BW=2320KiB/s (2375kB/s)(24.0MiB/10594msec) 00:16:11.110 slat (usec): min=478, max=6391.5k, avg=439935.54, stdev=1390526.79 00:16:11.110 clat (msec): min=34, max=10584, avg=8952.41, stdev=2539.52 00:16:11.110 lat (msec): min=6426, max=10593, avg=9392.34, stdev=1704.87 00:16:11.110 clat percentiles (msec): 00:16:11.110 | 1.00th=[ 35], 5.00th=[ 6409], 10.00th=[ 6477], 20.00th=[ 6477], 00:16:11.110 | 30.00th=[ 8658], 40.00th=[ 8658], 50.00th=[10537], 60.00th=[10537], 00:16:11.110 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:16:11.110 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:16:11.110 | 99.99th=[10537] 00:16:11.110 lat (msec) : 50=4.17%, >=2000=95.83% 00:16:11.110 cpu : usr=0.00%, sys=0.14%, ctx=25, majf=0, minf=6145 00:16:11.110 IO depths : 1=4.2%, 2=8.3%, 4=16.7%, 8=33.3%, 16=37.5%, 32=0.0%, >=64=0.0% 00:16:11.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.110 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:11.110 issued rwts: total=24,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.110 job0: (groupid=0, jobs=1): err= 0: pid=2024597: Thu Apr 18 17:32:35 2024 00:16:11.110 read: IOPS=1, BW=1447KiB/s (1482kB/s)(15.0MiB/10614msec) 00:16:11.110 slat (usec): min=517, max=4224.9k, avg=703466.08, stdev=1294240.48 00:16:11.110 clat (msec): min=61, max=10613, avg=8259.18, stdev=3068.14 00:16:11.110 lat (msec): min=4286, max=10613, avg=8962.65, stdev=2116.50 00:16:11.110 clat percentiles (msec): 00:16:11.110 | 1.00th=[ 62], 5.00th=[ 62], 10.00th=[ 4279], 20.00th=[ 6477], 00:16:11.110 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[10671], 00:16:11.110 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:16:11.110 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.110 | 99.99th=[10671] 00:16:11.110 lat (msec) : 100=6.67%, >=2000=93.33% 00:16:11.110 cpu : usr=0.00%, sys=0.08%, ctx=29, majf=0, minf=3841 00:16:11.110 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.110 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.110 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.110 job0: (groupid=0, jobs=1): err= 0: pid=2024598: Thu Apr 18 17:32:35 2024 00:16:11.110 read: IOPS=51, BW=51.7MiB/s (54.2MB/s)(546MiB/10571msec) 00:16:11.110 slat (usec): min=62, max=4190.2k, avg=19187.29, stdev=213483.03 00:16:11.110 clat (msec): min=91, max=8790, avg=2339.02, stdev=3377.17 00:16:11.110 lat (msec): min=319, max=8793, avg=2358.21, stdev=3385.34 
00:16:11.110 clat percentiles (msec): 00:16:11.110 | 1.00th=[ 330], 5.00th=[ 351], 10.00th=[ 368], 20.00th=[ 401], 00:16:11.110 | 30.00th=[ 414], 40.00th=[ 435], 50.00th=[ 481], 60.00th=[ 542], 00:16:11.110 | 70.00th=[ 667], 80.00th=[ 8288], 90.00th=[ 8557], 95.00th=[ 8792], 00:16:11.110 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:16:11.110 | 99.99th=[ 8792] 00:16:11.110 bw ( KiB/s): min= 4096, max=315392, per=5.90%, avg=142648.00, stdev=138871.69, samples=6 00:16:11.110 iops : min= 4, max= 308, avg=139.17, stdev=135.68, samples=6 00:16:11.110 lat (msec) : 100=0.18%, 500=54.21%, 750=20.88%, 1000=0.55%, >=2000=24.18% 00:16:11.110 cpu : usr=0.03%, sys=1.29%, ctx=329, majf=0, minf=32769 00:16:11.110 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.5% 00:16:11.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.110 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:11.110 issued rwts: total=546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.110 job1: (groupid=0, jobs=1): err= 0: pid=2024599: Thu Apr 18 17:32:35 2024 00:16:11.110 read: IOPS=2, BW=2338KiB/s (2395kB/s)(29.0MiB/12699msec) 00:16:11.110 slat (usec): min=470, max=6320.3k, avg=436630.36, stdev=1604805.84 00:16:11.110 clat (msec): min=36, max=12695, avg=11335.76, stdev=3087.45 00:16:11.110 lat (msec): min=6356, max=12698, avg=11772.39, stdev=2200.30 00:16:11.110 clat percentiles (msec): 00:16:11.110 | 1.00th=[ 37], 5.00th=[ 6342], 10.00th=[ 6342], 20.00th=[12550], 00:16:11.110 | 30.00th=[12550], 40.00th=[12550], 50.00th=[12684], 60.00th=[12684], 00:16:11.110 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:16:11.110 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:16:11.110 | 99.99th=[12684] 00:16:11.110 lat (msec) : 50=3.45%, >=2000=96.55% 00:16:11.110 cpu : usr=0.00%, sys=0.13%, ctx=43, majf=0, minf=7425 00:16:11.110 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:16:11.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.110 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:11.110 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.110 job1: (groupid=0, jobs=1): err= 0: pid=2024600: Thu Apr 18 17:32:35 2024 00:16:11.110 read: IOPS=15, BW=15.5MiB/s (16.3MB/s)(165MiB/10626msec) 00:16:11.110 slat (usec): min=77, max=4064.3k, avg=63816.89, stdev=409160.80 00:16:11.110 clat (msec): min=95, max=10455, avg=4899.00, stdev=1467.40 00:16:11.110 lat (msec): min=2247, max=10473, avg=4962.81, stdev=1482.19 00:16:11.110 clat percentiles (msec): 00:16:11.110 | 1.00th=[ 2232], 5.00th=[ 2265], 10.00th=[ 4178], 20.00th=[ 4178], 00:16:11.110 | 30.00th=[ 4212], 40.00th=[ 4212], 50.00th=[ 4245], 60.00th=[ 4279], 00:16:11.110 | 70.00th=[ 6342], 80.00th=[ 6342], 90.00th=[ 6409], 95.00th=[ 6409], 00:16:11.110 | 99.00th=[ 8658], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:16:11.110 | 99.99th=[10402] 00:16:11.110 bw ( KiB/s): min=75776, max=75776, per=3.14%, avg=75776.00, stdev= 0.00, samples=1 00:16:11.110 iops : min= 74, max= 74, avg=74.00, stdev= 0.00, samples=1 00:16:11.110 lat (msec) : 100=0.61%, >=2000=99.39% 00:16:11.110 cpu : usr=0.00%, sys=0.82%, ctx=114, majf=0, minf=32769 00:16:11.110 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 
16=9.7%, 32=19.4%, >=64=61.8% 00:16:11.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.110 complete : 0=0.0%, 4=97.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.6% 00:16:11.110 issued rwts: total=165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.110 job1: (groupid=0, jobs=1): err= 0: pid=2024601: Thu Apr 18 17:32:35 2024 00:16:11.110 read: IOPS=26, BW=26.5MiB/s (27.8MB/s)(337MiB/12734msec) 00:16:11.110 slat (usec): min=51, max=9496.6k, avg=37734.07, stdev=523713.56 00:16:11.110 clat (msec): min=14, max=9999, avg=4477.01, stdev=4168.13 00:16:11.110 lat (msec): min=493, max=10001, avg=4514.75, stdev=4165.75 00:16:11.110 clat percentiles (msec): 00:16:11.110 | 1.00th=[ 489], 5.00th=[ 510], 10.00th=[ 535], 20.00th=[ 584], 00:16:11.110 | 30.00th=[ 651], 40.00th=[ 2198], 50.00th=[ 2232], 60.00th=[ 2567], 00:16:11.110 | 70.00th=[ 9597], 80.00th=[ 9731], 90.00th=[ 9866], 95.00th=[ 9866], 00:16:11.110 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:16:11.110 | 99.99th=[10000] 00:16:11.110 bw ( KiB/s): min=83968, max=233472, per=5.91%, avg=142677.33, stdev=79749.38, samples=3 00:16:11.110 iops : min= 82, max= 228, avg=139.33, stdev=77.88, samples=3 00:16:11.110 lat (msec) : 20=0.30%, 500=1.78%, 750=34.72%, >=2000=63.20% 00:16:11.110 cpu : usr=0.02%, sys=0.80%, ctx=246, majf=0, minf=32769 00:16:11.110 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.5%, >=64=81.3% 00:16:11.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.111 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:11.111 issued rwts: total=337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.111 job1: (groupid=0, jobs=1): err= 0: pid=2024602: Thu Apr 18 17:32:35 2024 00:16:11.111 read: IOPS=19, BW=19.2MiB/s (20.1MB/s)(204MiB/10648msec) 00:16:11.111 slat (usec): min=51, max=4204.8k, avg=51785.41, stdev=355755.57 00:16:11.111 clat (msec): min=81, max=10555, avg=6234.98, stdev=3736.65 00:16:11.111 lat (msec): min=1007, max=10629, avg=6286.76, stdev=3715.51 00:16:11.111 clat percentiles (msec): 00:16:11.111 | 1.00th=[ 1003], 5.00th=[ 1070], 10.00th=[ 1133], 20.00th=[ 1234], 00:16:11.111 | 30.00th=[ 1502], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[ 9194], 00:16:11.111 | 70.00th=[ 9329], 80.00th=[ 9463], 90.00th=[ 9731], 95.00th=[ 9731], 00:16:11.111 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[10537], 99.95th=[10537], 00:16:11.111 | 99.99th=[10537] 00:16:11.111 bw ( KiB/s): min= 4096, max=92160, per=1.29%, avg=31129.60, stdev=34858.14, samples=5 00:16:11.111 iops : min= 4, max= 90, avg=30.40, stdev=34.04, samples=5 00:16:11.111 lat (msec) : 100=0.49%, 2000=30.88%, >=2000=68.63% 00:16:11.111 cpu : usr=0.00%, sys=1.19%, ctx=270, majf=0, minf=32769 00:16:11.111 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.7%, >=64=69.1% 00:16:11.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.111 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:16:11.111 issued rwts: total=204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.111 job1: (groupid=0, jobs=1): err= 0: pid=2024603: Thu Apr 18 17:32:35 2024 00:16:11.111 read: IOPS=6, BW=6746KiB/s (6908kB/s)(70.0MiB/10625msec) 00:16:11.111 slat (usec): min=407, max=4253.8k, avg=151074.34, stdev=628605.07 
00:16:11.111 clat (msec): min=48, max=10623, avg=4038.35, stdev=2804.55 00:16:11.111 lat (msec): min=1900, max=10624, avg=4189.43, stdev=2870.59 00:16:11.111 clat percentiles (msec): 00:16:11.111 | 1.00th=[ 49], 5.00th=[ 2022], 10.00th=[ 2022], 20.00th=[ 2039], 00:16:11.111 | 30.00th=[ 2165], 40.00th=[ 2198], 50.00th=[ 4111], 60.00th=[ 4111], 00:16:11.111 | 70.00th=[ 4279], 80.00th=[ 4279], 90.00th=[10537], 95.00th=[10671], 00:16:11.111 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.111 | 99.99th=[10671] 00:16:11.111 lat (msec) : 50=1.43%, 2000=2.86%, >=2000=95.71% 00:16:11.111 cpu : usr=0.00%, sys=0.32%, ctx=67, majf=0, minf=17921 00:16:11.111 IO depths : 1=1.4%, 2=2.9%, 4=5.7%, 8=11.4%, 16=22.9%, 32=45.7%, >=64=10.0% 00:16:11.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.111 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:11.111 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.111 job1: (groupid=0, jobs=1): err= 0: pid=2024604: Thu Apr 18 17:32:35 2024 00:16:11.111 read: IOPS=0, BW=646KiB/s (661kB/s)(8192KiB/12682msec) 00:16:11.111 slat (msec): min=8, max=10641, avg=1583.27, stdev=3717.16 00:16:11.111 clat (msec): min=15, max=12620, avg=10075.78, stdev=4174.96 00:16:11.111 lat (msec): min=10656, max=12681, avg=11659.05, stdev=1037.07 00:16:11.111 clat percentiles (msec): 00:16:11.111 | 1.00th=[ 16], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[10671], 00:16:11.111 | 30.00th=[10671], 40.00th=[10671], 50.00th=[10671], 60.00th=[10671], 00:16:11.111 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12684], 95.00th=[12684], 00:16:11.111 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:16:11.111 | 99.99th=[12684] 00:16:11.111 lat (msec) : 20=12.50%, >=2000=87.50% 00:16:11.111 cpu : usr=0.00%, sys=0.05%, ctx=25, majf=0, minf=2049 00:16:11.111 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.111 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.111 issued rwts: total=8,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.111 job1: (groupid=0, jobs=1): err= 0: pid=2024605: Thu Apr 18 17:32:35 2024 00:16:11.111 read: IOPS=2, BW=2501KiB/s (2561kB/s)(26.0MiB/10647msec) 00:16:11.111 slat (usec): min=470, max=6151.8k, avg=408764.02, stdev=1436945.05 00:16:11.111 clat (msec): min=18, max=10645, avg=9448.85, stdev=2807.11 00:16:11.111 lat (msec): min=4274, max=10646, avg=9857.61, stdev=2050.93 00:16:11.111 clat percentiles (msec): 00:16:11.111 | 1.00th=[ 19], 5.00th=[ 4279], 10.00th=[ 4279], 20.00th=[10402], 00:16:11.111 | 30.00th=[10537], 40.00th=[10537], 50.00th=[10671], 60.00th=[10671], 00:16:11.111 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:16:11.111 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.111 | 99.99th=[10671] 00:16:11.111 lat (msec) : 20=3.85%, >=2000=96.15% 00:16:11.111 cpu : usr=0.00%, sys=0.14%, ctx=40, majf=0, minf=6657 00:16:11.111 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:16:11.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.111 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:11.111 issued rwts: 
total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.111 job1: (groupid=0, jobs=1): err= 0: pid=2024606: Thu Apr 18 17:32:35 2024 00:16:11.111 read: IOPS=231, BW=231MiB/s (243MB/s)(2347MiB/10144msec) 00:16:11.111 slat (usec): min=43, max=1748.5k, avg=4292.93, stdev=38051.35 00:16:11.111 clat (msec): min=50, max=2587, avg=460.81, stdev=317.64 00:16:11.111 lat (msec): min=117, max=2589, avg=465.10, stdev=321.26 00:16:11.111 clat percentiles (msec): 00:16:11.111 | 1.00th=[ 153], 5.00th=[ 205], 10.00th=[ 243], 20.00th=[ 279], 00:16:11.111 | 30.00th=[ 309], 40.00th=[ 334], 50.00th=[ 376], 60.00th=[ 435], 00:16:11.111 | 70.00th=[ 514], 80.00th=[ 609], 90.00th=[ 718], 95.00th=[ 818], 00:16:11.111 | 99.00th=[ 2500], 99.50th=[ 2567], 99.90th=[ 2601], 99.95th=[ 2601], 00:16:11.111 | 99.99th=[ 2601] 00:16:11.111 bw ( KiB/s): min=153600, max=473088, per=12.54%, avg=302967.47, stdev=111676.96, samples=15 00:16:11.111 iops : min= 150, max= 462, avg=295.87, stdev=109.06, samples=15 00:16:11.111 lat (msec) : 100=0.04%, 250=12.23%, 500=54.71%, 750=24.63%, 1000=6.77% 00:16:11.111 lat (msec) : >=2000=1.62% 00:16:11.111 cpu : usr=0.13%, sys=2.82%, ctx=1227, majf=0, minf=32769 00:16:11.111 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:16:11.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.111 issued rwts: total=2347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.111 job1: (groupid=0, jobs=1): err= 0: pid=2024607: Thu Apr 18 17:32:35 2024 00:16:11.111 read: IOPS=4, BW=4896KiB/s (5014kB/s)(51.0MiB/10666msec) 00:16:11.111 slat (usec): min=552, max=4167.9k, avg=207172.76, stdev=739497.04 00:16:11.111 clat (msec): min=99, max=10664, avg=9387.09, stdev=2290.42 00:16:11.111 lat (msec): min=4267, max=10665, avg=9594.26, stdev=1873.45 00:16:11.111 clat percentiles (msec): 00:16:11.111 | 1.00th=[ 101], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[ 8557], 00:16:11.111 | 30.00th=[10402], 40.00th=[10402], 50.00th=[10402], 60.00th=[10537], 00:16:11.111 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:16:11.111 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.111 | 99.99th=[10671] 00:16:11.111 lat (msec) : 100=1.96%, >=2000=98.04% 00:16:11.111 cpu : usr=0.00%, sys=0.43%, ctx=86, majf=0, minf=13057 00:16:11.111 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:16:11.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.111 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:11.111 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.111 job1: (groupid=0, jobs=1): err= 0: pid=2024608: Thu Apr 18 17:32:35 2024 00:16:11.111 read: IOPS=1, BW=1616KiB/s (1654kB/s)(20.0MiB/12677msec) 00:16:11.111 slat (usec): min=459, max=6328.3k, avg=632512.53, stdev=1938940.13 00:16:11.111 clat (msec): min=26, max=12674, avg=10776.00, stdev=3605.68 00:16:11.111 lat (msec): min=6354, max=12676, avg=11408.52, stdev=2586.06 00:16:11.111 clat percentiles (msec): 00:16:11.111 | 1.00th=[ 27], 5.00th=[ 27], 10.00th=[ 6342], 20.00th=[ 6342], 00:16:11.111 | 30.00th=[12684], 40.00th=[12684], 50.00th=[12684], 60.00th=[12684], 00:16:11.111 | 
70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:16:11.111 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:16:11.111 | 99.99th=[12684] 00:16:11.111 lat (msec) : 50=5.00%, >=2000=95.00% 00:16:11.111 cpu : usr=0.00%, sys=0.09%, ctx=30, majf=0, minf=5121 00:16:11.111 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0% 00:16:11.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.111 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:11.111 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.111 job1: (groupid=0, jobs=1): err= 0: pid=2024609: Thu Apr 18 17:32:35 2024 00:16:11.111 read: IOPS=2, BW=2218KiB/s (2271kB/s)(23.0MiB/10618msec) 00:16:11.111 slat (usec): min=512, max=4205.4k, avg=458064.53, stdev=1071800.44 00:16:11.111 clat (msec): min=81, max=10584, avg=7206.74, stdev=2433.61 00:16:11.111 lat (msec): min=4286, max=10616, avg=7664.80, stdev=1980.88 00:16:11.111 clat percentiles (msec): 00:16:11.111 | 1.00th=[ 82], 5.00th=[ 4279], 10.00th=[ 6342], 20.00th=[ 6342], 00:16:11.111 | 30.00th=[ 6342], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 6409], 00:16:11.111 | 70.00th=[ 8658], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:16:11.111 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:16:11.111 | 99.99th=[10537] 00:16:11.111 lat (msec) : 100=4.35%, >=2000=95.65% 00:16:11.111 cpu : usr=0.00%, sys=0.16%, ctx=53, majf=0, minf=5889 00:16:11.111 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:16:11.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.111 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:11.111 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.111 job1: (groupid=0, jobs=1): err= 0: pid=2024610: Thu Apr 18 17:32:35 2024 00:16:11.112 read: IOPS=43, BW=43.4MiB/s (45.5MB/s)(459MiB/10575msec) 00:16:11.112 slat (usec): min=49, max=4070.2k, avg=22820.62, stdev=233199.58 00:16:11.112 clat (msec): min=95, max=8842, avg=2771.65, stdev=3422.53 00:16:11.112 lat (msec): min=244, max=8844, avg=2794.47, stdev=3429.70 00:16:11.112 clat percentiles (msec): 00:16:11.112 | 1.00th=[ 247], 5.00th=[ 288], 10.00th=[ 321], 20.00th=[ 347], 00:16:11.112 | 30.00th=[ 376], 40.00th=[ 422], 50.00th=[ 693], 60.00th=[ 1028], 00:16:11.112 | 70.00th=[ 4279], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 8792], 00:16:11.112 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:16:11.112 | 99.99th=[ 8792] 00:16:11.112 bw ( KiB/s): min=10240, max=352256, per=5.61%, avg=135577.60, stdev=160336.30, samples=5 00:16:11.112 iops : min= 10, max= 344, avg=132.40, stdev=156.58, samples=5 00:16:11.112 lat (msec) : 100=0.22%, 250=1.09%, 500=43.14%, 750=8.06%, 1000=5.66% 00:16:11.112 lat (msec) : 2000=8.50%, >=2000=33.33% 00:16:11.112 cpu : usr=0.02%, sys=1.46%, ctx=412, majf=0, minf=32769 00:16:11.112 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.3% 00:16:11.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.112 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:16:11.112 issued rwts: total=459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.112 latency : target=0, window=0, percentile=100.00%, depth=128 
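The fio-wrapper invocation above (-p nvmf -i 1048576 -d 128 -t read -r 10 -n 13) expands to the [global]/[jobN] sections printed before the run: 1 MiB sequential reads, queue depth 128, 13 jobs per device, 10 s time-based, with one job section per connected namespace. A rough stand-alone equivalent, reconstructed from those logged sections (a sketch only; the wrapper's generated file may differ in detail, and the job file name is made up):

cat > srq_overwhelm.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=1048576
iodepth=128
norandommap=1
numjobs=13
EOF
# one job section per connected namespace, /dev/nvme0n1 .. /dev/nvme5n1
for n in 0 1 2 3 4 5; do
    printf '\n[job%d]\nfilename=/dev/nvme%dn1\n' "$n" "$n" >> srq_overwhelm.fio
done
fio srq_overwhelm.fio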
00:16:11.112 job1: (groupid=0, jobs=1): err= 0: pid=2024611: Thu Apr 18 17:32:35 2024 00:16:11.112 read: IOPS=71, BW=71.7MiB/s (75.2MB/s)(912MiB/12724msec) 00:16:11.112 slat (usec): min=50, max=6360.7k, avg=13927.27, stdev=253672.38 00:16:11.112 clat (msec): min=15, max=11021, avg=1734.98, stdev=3682.87 00:16:11.112 lat (msec): min=131, max=11037, avg=1748.90, stdev=3694.55 00:16:11.112 clat percentiles (msec): 00:16:11.112 | 1.00th=[ 138], 5.00th=[ 155], 10.00th=[ 178], 20.00th=[ 218], 00:16:11.112 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 266], 60.00th=[ 275], 00:16:11.112 | 70.00th=[ 292], 80.00th=[ 317], 90.00th=[10805], 95.00th=[10939], 00:16:11.112 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:16:11.112 | 99.99th=[11073] 00:16:11.112 bw ( KiB/s): min= 2048, max=561152, per=11.08%, avg=267605.33, stdev=276225.53, samples=6 00:16:11.112 iops : min= 2, max= 548, avg=261.33, stdev=269.75, samples=6 00:16:11.112 lat (msec) : 20=0.11%, 250=42.87%, 500=42.98%, >=2000=14.04% 00:16:11.112 cpu : usr=0.05%, sys=1.30%, ctx=510, majf=0, minf=32769 00:16:11.112 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:16:11.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.112 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.112 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.112 job2: (groupid=0, jobs=1): err= 0: pid=2024612: Thu Apr 18 17:32:35 2024 00:16:11.112 read: IOPS=11, BW=11.9MiB/s (12.5MB/s)(127MiB/10674msec) 00:16:11.112 slat (usec): min=439, max=2128.2k, avg=83503.11, stdev=369823.63 00:16:11.112 clat (msec): min=67, max=10672, avg=6837.28, stdev=4031.10 00:16:11.112 lat (msec): min=1571, max=10673, avg=6920.79, stdev=3999.48 00:16:11.112 clat percentiles (msec): 00:16:11.112 | 1.00th=[ 1569], 5.00th=[ 1687], 10.00th=[ 1737], 20.00th=[ 1989], 00:16:11.112 | 30.00th=[ 2022], 40.00th=[ 4329], 50.00th=[ 8658], 60.00th=[10402], 00:16:11.112 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:16:11.112 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.112 | 99.99th=[10671] 00:16:11.112 lat (msec) : 100=0.79%, 2000=25.20%, >=2000=74.02% 00:16:11.112 cpu : usr=0.01%, sys=0.84%, ctx=146, majf=0, minf=32513 00:16:11.112 IO depths : 1=0.8%, 2=1.6%, 4=3.1%, 8=6.3%, 16=12.6%, 32=25.2%, >=64=50.4% 00:16:11.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.112 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:11.112 issued rwts: total=127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.112 job2: (groupid=0, jobs=1): err= 0: pid=2024613: Thu Apr 18 17:32:35 2024 00:16:11.112 read: IOPS=13, BW=14.0MiB/s (14.6MB/s)(147MiB/10530msec) 00:16:11.112 slat (usec): min=98, max=2089.3k, avg=71506.15, stdev=346589.07 00:16:11.112 clat (msec): min=17, max=10407, avg=6902.70, stdev=2048.57 00:16:11.112 lat (msec): min=2055, max=10438, avg=6974.20, stdev=1986.68 00:16:11.112 clat percentiles (msec): 00:16:11.112 | 1.00th=[ 2056], 5.00th=[ 2089], 10.00th=[ 4279], 20.00th=[ 4279], 00:16:11.112 | 30.00th=[ 6342], 40.00th=[ 7953], 50.00th=[ 8020], 60.00th=[ 8087], 00:16:11.112 | 70.00th=[ 8154], 80.00th=[ 8221], 90.00th=[ 8356], 95.00th=[ 8356], 00:16:11.112 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 
00:16:11.112 | 99.99th=[10402] 00:16:11.112 bw ( KiB/s): min= 2048, max=20480, per=0.57%, avg=13653.33, stdev=10102.54, samples=3 00:16:11.112 iops : min= 2, max= 20, avg=13.33, stdev= 9.87, samples=3 00:16:11.112 lat (msec) : 20=0.68%, >=2000=99.32% 00:16:11.112 cpu : usr=0.00%, sys=0.76%, ctx=116, majf=0, minf=32769 00:16:11.112 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=5.4%, 16=10.9%, 32=21.8%, >=64=57.1% 00:16:11.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.112 complete : 0=0.0%, 4=95.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.8% 00:16:11.112 issued rwts: total=147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.112 job2: (groupid=0, jobs=1): err= 0: pid=2024614: Thu Apr 18 17:32:35 2024 00:16:11.112 read: IOPS=1, BW=1298KiB/s (1329kB/s)(16.0MiB/12623msec) 00:16:11.112 slat (usec): min=1016, max=4309.6k, avg=662244.67, stdev=1267755.55 00:16:11.112 clat (msec): min=2026, max=12606, avg=11009.60, stdev=3003.45 00:16:11.112 lat (msec): min=6335, max=12622, avg=11671.85, stdev=1829.21 00:16:11.112 clat percentiles (msec): 00:16:11.112 | 1.00th=[ 2022], 5.00th=[ 2022], 10.00th=[ 6342], 20.00th=[10671], 00:16:11.112 | 30.00th=[10671], 40.00th=[12550], 50.00th=[12550], 60.00th=[12550], 00:16:11.112 | 70.00th=[12550], 80.00th=[12550], 90.00th=[12550], 95.00th=[12550], 00:16:11.112 | 99.00th=[12550], 99.50th=[12550], 99.90th=[12550], 99.95th=[12550], 00:16:11.112 | 99.99th=[12550] 00:16:11.112 lat (msec) : >=2000=100.00% 00:16:11.112 cpu : usr=0.00%, sys=0.11%, ctx=29, majf=0, minf=4097 00:16:11.112 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:16:11.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.112 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.112 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.112 job2: (groupid=0, jobs=1): err= 0: pid=2024615: Thu Apr 18 17:32:35 2024 00:16:11.112 read: IOPS=2, BW=2432KiB/s (2490kB/s)(30.0MiB/12632msec) 00:16:11.112 slat (usec): min=448, max=4246.8k, avg=353164.83, stdev=961842.83 00:16:11.112 clat (msec): min=2036, max=12630, avg=9320.41, stdev=3825.15 00:16:11.112 lat (msec): min=4141, max=12631, avg=9673.58, stdev=3612.65 00:16:11.112 clat percentiles (msec): 00:16:11.112 | 1.00th=[ 2039], 5.00th=[ 4144], 10.00th=[ 4144], 20.00th=[ 4144], 00:16:11.112 | 30.00th=[ 6342], 40.00th=[ 6342], 50.00th=[12416], 60.00th=[12684], 00:16:11.112 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:16:11.112 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:16:11.112 | 99.99th=[12684] 00:16:11.112 lat (msec) : >=2000=100.00% 00:16:11.112 cpu : usr=0.01%, sys=0.12%, ctx=42, majf=0, minf=7681 00:16:11.112 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:16:11.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.112 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:11.112 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.112 job2: (groupid=0, jobs=1): err= 0: pid=2024616: Thu Apr 18 17:32:35 2024 00:16:11.112 read: IOPS=2, BW=2712KiB/s (2777kB/s)(28.0MiB/10572msec) 00:16:11.112 slat (usec): min=708, max=2153.3k, avg=377256.71, stdev=802820.91 
00:16:11.112 clat (msec): min=7, max=10570, avg=7956.80, stdev=2946.29 00:16:11.112 lat (msec): min=2101, max=10570, avg=8334.06, stdev=2538.86 00:16:11.112 clat percentiles (msec): 00:16:11.112 | 1.00th=[ 8], 5.00th=[ 2106], 10.00th=[ 4245], 20.00th=[ 6409], 00:16:11.112 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[10537], 00:16:11.112 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:16:11.112 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:16:11.112 | 99.99th=[10537] 00:16:11.112 lat (msec) : 10=3.57%, >=2000=96.43% 00:16:11.112 cpu : usr=0.00%, sys=0.20%, ctx=55, majf=0, minf=7169 00:16:11.112 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:16:11.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.112 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:11.112 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.112 job2: (groupid=0, jobs=1): err= 0: pid=2024617: Thu Apr 18 17:32:35 2024 00:16:11.112 read: IOPS=1, BW=1133KiB/s (1160kB/s)(14.0MiB/12653msec) 00:16:11.112 slat (usec): min=509, max=6138.7k, avg=756446.06, stdev=1725822.96 00:16:11.112 clat (msec): min=2061, max=12642, avg=9309.20, stdev=4113.02 00:16:11.112 lat (msec): min=4147, max=12651, avg=10065.65, stdev=3622.12 00:16:11.112 clat percentiles (msec): 00:16:11.112 | 1.00th=[ 2056], 5.00th=[ 2056], 10.00th=[ 4144], 20.00th=[ 4178], 00:16:11.112 | 30.00th=[ 6342], 40.00th=[ 6342], 50.00th=[12550], 60.00th=[12684], 00:16:11.112 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:16:11.112 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:16:11.112 | 99.99th=[12684] 00:16:11.112 lat (msec) : >=2000=100.00% 00:16:11.112 cpu : usr=0.00%, sys=0.09%, ctx=33, majf=0, minf=3585 00:16:11.112 IO depths : 1=7.1%, 2=14.3%, 4=28.6%, 8=50.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:11.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.112 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.112 issued rwts: total=14,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.112 job2: (groupid=0, jobs=1): err= 0: pid=2024618: Thu Apr 18 17:32:35 2024 00:16:11.112 read: IOPS=2, BW=2176KiB/s (2228kB/s)(27.0MiB/12705msec) 00:16:11.112 slat (usec): min=693, max=6138.7k, avg=395403.72, stdev=1413420.18 00:16:11.113 clat (msec): min=2028, max=12702, avg=11804.96, stdev=2581.33 00:16:11.113 lat (msec): min=6324, max=12704, avg=12200.37, stdev=1690.00 00:16:11.113 clat percentiles (msec): 00:16:11.113 | 1.00th=[ 2022], 5.00th=[ 6342], 10.00th=[ 6342], 20.00th=[12684], 00:16:11.113 | 30.00th=[12684], 40.00th=[12684], 50.00th=[12684], 60.00th=[12684], 00:16:11.113 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:16:11.113 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:16:11.113 | 99.99th=[12684] 00:16:11.113 lat (msec) : >=2000=100.00% 00:16:11.113 cpu : usr=0.00%, sys=0.21%, ctx=40, majf=0, minf=6913 00:16:11.113 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:16:11.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.113 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:11.113 issued rwts: total=27,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:16:11.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.113 job2: (groupid=0, jobs=1): err= 0: pid=2024619: Thu Apr 18 17:32:35 2024 00:16:11.113 read: IOPS=12, BW=12.9MiB/s (13.5MB/s)(163MiB/12678msec) 00:16:11.113 slat (usec): min=100, max=2149.5k, avg=65338.13, stdev=313269.18 00:16:11.113 clat (msec): min=2026, max=10652, avg=5217.86, stdev=2521.95 00:16:11.113 lat (msec): min=2867, max=12448, avg=5283.20, stdev=2574.40 00:16:11.113 clat percentiles (msec): 00:16:11.113 | 1.00th=[ 2869], 5.00th=[ 3004], 10.00th=[ 3104], 20.00th=[ 3239], 00:16:11.113 | 30.00th=[ 3339], 40.00th=[ 3507], 50.00th=[ 3742], 60.00th=[ 4144], 00:16:11.113 | 70.00th=[ 6342], 80.00th=[ 8356], 90.00th=[ 9463], 95.00th=[ 9597], 00:16:11.113 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.113 | 99.99th=[10671] 00:16:11.113 bw ( KiB/s): min= 2048, max=55296, per=1.02%, avg=24576.00, stdev=27553.02, samples=3 00:16:11.113 iops : min= 2, max= 54, avg=24.00, stdev=26.91, samples=3 00:16:11.113 lat (msec) : >=2000=100.00% 00:16:11.113 cpu : usr=0.00%, sys=0.78%, ctx=177, majf=0, minf=32769 00:16:11.113 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=4.9%, 16=9.8%, 32=19.6%, >=64=61.3% 00:16:11.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.113 complete : 0=0.0%, 4=97.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.7% 00:16:11.113 issued rwts: total=163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.113 job2: (groupid=0, jobs=1): err= 0: pid=2024620: Thu Apr 18 17:32:35 2024 00:16:11.113 read: IOPS=24, BW=24.1MiB/s (25.2MB/s)(243MiB/10101msec) 00:16:11.113 slat (usec): min=49, max=2125.7k, avg=41228.86, stdev=260022.19 00:16:11.113 clat (msec): min=80, max=9387, avg=2007.80, stdev=2946.24 00:16:11.113 lat (msec): min=100, max=9407, avg=2049.03, stdev=2981.43 00:16:11.113 clat percentiles (msec): 00:16:11.113 | 1.00th=[ 103], 5.00th=[ 136], 10.00th=[ 220], 20.00th=[ 405], 00:16:11.113 | 30.00th=[ 550], 40.00th=[ 768], 50.00th=[ 936], 60.00th=[ 1028], 00:16:11.113 | 70.00th=[ 1099], 80.00th=[ 1167], 90.00th=[ 9194], 95.00th=[ 9329], 00:16:11.113 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:16:11.113 | 99.99th=[ 9329] 00:16:11.113 bw ( KiB/s): min=82084, max=153600, per=4.88%, avg=117842.00, stdev=50569.45, samples=2 00:16:11.113 iops : min= 80, max= 150, avg=115.00, stdev=49.50, samples=2 00:16:11.113 lat (msec) : 100=0.41%, 250=11.93%, 500=12.35%, 750=13.58%, 1000=17.28% 00:16:11.113 lat (msec) : 2000=26.34%, >=2000=18.11% 00:16:11.113 cpu : usr=0.00%, sys=0.73%, ctx=197, majf=0, minf=32769 00:16:11.113 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.3%, 16=6.6%, 32=13.2%, >=64=74.1% 00:16:11.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.113 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:16:11.113 issued rwts: total=243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.113 job2: (groupid=0, jobs=1): err= 0: pid=2024621: Thu Apr 18 17:32:35 2024 00:16:11.113 read: IOPS=7, BW=7577KiB/s (7759kB/s)(79.0MiB/10676msec) 00:16:11.113 slat (usec): min=463, max=2141.3k, avg=134803.97, stdev=500086.96 00:16:11.113 clat (msec): min=25, max=10674, avg=8180.28, stdev=3246.72 00:16:11.113 lat (msec): min=2111, max=10675, avg=8315.09, stdev=3122.51 00:16:11.113 clat percentiles (msec): 00:16:11.113 | 
1.00th=[ 26], 5.00th=[ 4144], 10.00th=[ 4144], 20.00th=[ 4329], 00:16:11.113 | 30.00th=[ 4329], 40.00th=[10537], 50.00th=[10537], 60.00th=[10671], 00:16:11.113 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:16:11.113 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.113 | 99.99th=[10671] 00:16:11.113 lat (msec) : 50=1.27%, >=2000=98.73% 00:16:11.113 cpu : usr=0.00%, sys=0.62%, ctx=81, majf=0, minf=20225 00:16:11.113 IO depths : 1=1.3%, 2=2.5%, 4=5.1%, 8=10.1%, 16=20.3%, 32=40.5%, >=64=20.3% 00:16:11.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.113 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:11.113 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.113 job2: (groupid=0, jobs=1): err= 0: pid=2024622: Thu Apr 18 17:32:35 2024 00:16:11.113 read: IOPS=55, BW=55.1MiB/s (57.8MB/s)(557MiB/10112msec) 00:16:11.113 slat (usec): min=45, max=1904.6k, avg=17957.98, stdev=139018.65 00:16:11.113 clat (msec): min=104, max=4443, avg=2076.57, stdev=1295.84 00:16:11.113 lat (msec): min=131, max=4446, avg=2094.53, stdev=1299.95 00:16:11.113 clat percentiles (msec): 00:16:11.113 | 1.00th=[ 140], 5.00th=[ 347], 10.00th=[ 514], 20.00th=[ 911], 00:16:11.113 | 30.00th=[ 995], 40.00th=[ 1972], 50.00th=[ 2056], 60.00th=[ 2165], 00:16:11.113 | 70.00th=[ 2366], 80.00th=[ 2903], 90.00th=[ 4329], 95.00th=[ 4396], 00:16:11.113 | 99.00th=[ 4396], 99.50th=[ 4396], 99.90th=[ 4463], 99.95th=[ 4463], 00:16:11.113 | 99.99th=[ 4463] 00:16:11.113 bw ( KiB/s): min= 8192, max=145408, per=3.64%, avg=88064.00, stdev=54868.74, samples=10 00:16:11.113 iops : min= 8, max= 142, avg=86.00, stdev=53.58, samples=10 00:16:11.113 lat (msec) : 250=3.05%, 500=6.28%, 750=7.00%, 1000=14.54%, 2000=11.31% 00:16:11.113 lat (msec) : >=2000=57.81% 00:16:11.113 cpu : usr=0.03%, sys=1.39%, ctx=406, majf=0, minf=32769 00:16:11.113 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.7% 00:16:11.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.113 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:11.113 issued rwts: total=557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.113 job2: (groupid=0, jobs=1): err= 0: pid=2024623: Thu Apr 18 17:32:35 2024 00:16:11.113 read: IOPS=12, BW=12.6MiB/s (13.2MB/s)(134MiB/10613msec) 00:16:11.113 slat (usec): min=400, max=2141.3k, avg=78858.53, stdev=357201.86 00:16:11.113 clat (msec): min=45, max=10575, avg=7556.74, stdev=2289.24 00:16:11.113 lat (msec): min=2083, max=10582, avg=7635.60, stdev=2208.78 00:16:11.113 clat percentiles (msec): 00:16:11.113 | 1.00th=[ 2089], 5.00th=[ 4144], 10.00th=[ 5873], 20.00th=[ 6007], 00:16:11.113 | 30.00th=[ 6141], 40.00th=[ 6275], 50.00th=[ 6409], 60.00th=[ 8288], 00:16:11.113 | 70.00th=[10134], 80.00th=[10268], 90.00th=[10268], 95.00th=[10537], 00:16:11.113 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:16:11.113 | 99.99th=[10537] 00:16:11.113 bw ( KiB/s): min= 6144, max= 6144, per=0.25%, avg=6144.00, stdev= 0.00, samples=2 00:16:11.113 iops : min= 6, max= 6, avg= 6.00, stdev= 0.00, samples=2 00:16:11.113 lat (msec) : 50=0.75%, >=2000=99.25% 00:16:11.113 cpu : usr=0.00%, sys=0.85%, ctx=115, majf=0, minf=32769 00:16:11.113 IO depths : 1=0.7%, 2=1.5%, 4=3.0%, 8=6.0%, 16=11.9%, 32=23.9%, 
>=64=53.0% 00:16:11.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.113 complete : 0=0.0%, 4=87.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=12.5% 00:16:11.113 issued rwts: total=134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.113 job2: (groupid=0, jobs=1): err= 0: pid=2024624: Thu Apr 18 17:32:35 2024 00:16:11.113 read: IOPS=8, BW=8229KiB/s (8427kB/s)(85.0MiB/10577msec) 00:16:11.113 slat (usec): min=434, max=2096.4k, avg=124051.75, stdev=464145.64 00:16:11.113 clat (msec): min=31, max=10575, avg=3919.34, stdev=2346.42 00:16:11.114 lat (msec): min=1863, max=10575, avg=4043.39, stdev=2416.13 00:16:11.114 clat percentiles (msec): 00:16:11.114 | 1.00th=[ 32], 5.00th=[ 1888], 10.00th=[ 1989], 20.00th=[ 2022], 00:16:11.114 | 30.00th=[ 2165], 40.00th=[ 2165], 50.00th=[ 4111], 60.00th=[ 4111], 00:16:11.114 | 70.00th=[ 4245], 80.00th=[ 4279], 90.00th=[ 6409], 95.00th=[10402], 00:16:11.114 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:16:11.114 | 99.99th=[10537] 00:16:11.114 lat (msec) : 50=1.18%, 2000=9.41%, >=2000=89.41% 00:16:11.114 cpu : usr=0.00%, sys=0.58%, ctx=87, majf=0, minf=21761 00:16:11.114 IO depths : 1=1.2%, 2=2.4%, 4=4.7%, 8=9.4%, 16=18.8%, 32=37.6%, >=64=25.9% 00:16:11.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.114 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:11.114 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.114 job3: (groupid=0, jobs=1): err= 0: pid=2024625: Thu Apr 18 17:32:35 2024 00:16:11.114 read: IOPS=145, BW=145MiB/s (152MB/s)(1460MiB/10047msec) 00:16:11.114 slat (usec): min=51, max=1775.0k, avg=6843.76, stdev=59988.39 00:16:11.114 clat (msec): min=45, max=4804, avg=707.46, stdev=570.27 00:16:11.114 lat (msec): min=52, max=4946, avg=714.31, stdev=579.36 00:16:11.114 clat percentiles (msec): 00:16:11.114 | 1.00th=[ 136], 5.00th=[ 205], 10.00th=[ 249], 20.00th=[ 305], 00:16:11.114 | 30.00th=[ 351], 40.00th=[ 477], 50.00th=[ 584], 60.00th=[ 701], 00:16:11.114 | 70.00th=[ 751], 80.00th=[ 818], 90.00th=[ 1720], 95.00th=[ 2299], 00:16:11.114 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 3037], 99.95th=[ 4799], 00:16:11.114 | 99.99th=[ 4799] 00:16:11.114 bw ( KiB/s): min=16384, max=430941, per=8.07%, avg=195060.36, stdev=131075.74, samples=14 00:16:11.114 iops : min= 16, max= 420, avg=190.43, stdev=127.89, samples=14 00:16:11.114 lat (msec) : 50=0.07%, 100=0.82%, 250=9.66%, 500=31.16%, 750=28.49% 00:16:11.114 lat (msec) : 1000=19.18%, 2000=2.81%, >=2000=7.81% 00:16:11.114 cpu : usr=0.07%, sys=1.71%, ctx=1107, majf=0, minf=32769 00:16:11.114 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:16:11.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.114 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.114 issued rwts: total=1460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.114 job3: (groupid=0, jobs=1): err= 0: pid=2024626: Thu Apr 18 17:32:35 2024 00:16:11.114 read: IOPS=10, BW=10.2MiB/s (10.7MB/s)(108MiB/10549msec) 00:16:11.114 slat (usec): min=468, max=2088.3k, avg=97073.15, stdev=407524.02 00:16:11.114 clat (msec): min=64, max=10547, avg=4649.24, stdev=2684.11 00:16:11.114 lat (msec): min=1960, max=10548, 
avg=4746.31, stdev=2706.23 00:16:11.114 clat percentiles (msec): 00:16:11.114 | 1.00th=[ 1955], 5.00th=[ 1972], 10.00th=[ 1972], 20.00th=[ 2123], 00:16:11.114 | 30.00th=[ 3876], 40.00th=[ 3910], 50.00th=[ 4010], 60.00th=[ 4111], 00:16:11.114 | 70.00th=[ 4245], 80.00th=[ 6409], 90.00th=[10537], 95.00th=[10537], 00:16:11.114 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:16:11.114 | 99.99th=[10537] 00:16:11.114 lat (msec) : 100=0.93%, 2000=10.19%, >=2000=88.89% 00:16:11.114 cpu : usr=0.01%, sys=0.82%, ctx=101, majf=0, minf=27649 00:16:11.114 IO depths : 1=0.9%, 2=1.9%, 4=3.7%, 8=7.4%, 16=14.8%, 32=29.6%, >=64=41.7% 00:16:11.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.114 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:11.114 issued rwts: total=108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.114 job3: (groupid=0, jobs=1): err= 0: pid=2024627: Thu Apr 18 17:32:35 2024 00:16:11.114 read: IOPS=79, BW=79.8MiB/s (83.7MB/s)(809MiB/10140msec) 00:16:11.114 slat (usec): min=59, max=2096.6k, avg=12449.03, stdev=119948.07 00:16:11.114 clat (msec): min=63, max=9418, avg=1317.73, stdev=1797.74 00:16:11.114 lat (msec): min=140, max=9425, avg=1330.18, stdev=1812.15 00:16:11.114 clat percentiles (msec): 00:16:11.114 | 1.00th=[ 150], 5.00th=[ 334], 10.00th=[ 359], 20.00th=[ 393], 00:16:11.114 | 30.00th=[ 401], 40.00th=[ 422], 50.00th=[ 447], 60.00th=[ 481], 00:16:11.114 | 70.00th=[ 818], 80.00th=[ 1301], 90.00th=[ 4933], 95.00th=[ 5201], 00:16:11.114 | 99.00th=[ 7282], 99.50th=[ 9194], 99.90th=[ 9463], 99.95th=[ 9463], 00:16:11.114 | 99.99th=[ 9463] 00:16:11.114 bw ( KiB/s): min=14336, max=303104, per=5.77%, avg=139446.70, stdev=110288.08, samples=10 00:16:11.114 iops : min= 14, max= 296, avg=136.10, stdev=107.73, samples=10 00:16:11.114 lat (msec) : 100=0.12%, 250=3.46%, 500=60.44%, 750=5.69%, 1000=4.20% 00:16:11.114 lat (msec) : 2000=6.30%, >=2000=19.78% 00:16:11.114 cpu : usr=0.06%, sys=1.36%, ctx=682, majf=0, minf=32769 00:16:11.114 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:16:11.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.114 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.114 issued rwts: total=809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.114 job3: (groupid=0, jobs=1): err= 0: pid=2024628: Thu Apr 18 17:32:35 2024 00:16:11.114 read: IOPS=5, BW=5995KiB/s (6139kB/s)(74.0MiB/12640msec) 00:16:11.114 slat (usec): min=466, max=2114.1k, avg=143369.88, stdev=499884.65 00:16:11.114 clat (msec): min=2030, max=12637, avg=9180.45, stdev=3200.51 00:16:11.114 lat (msec): min=4144, max=12639, avg=9323.82, stdev=3112.24 00:16:11.114 clat percentiles (msec): 00:16:11.114 | 1.00th=[ 2039], 5.00th=[ 4144], 10.00th=[ 6007], 20.00th=[ 6074], 00:16:11.114 | 30.00th=[ 6208], 40.00th=[ 6342], 50.00th=[ 8490], 60.00th=[12416], 00:16:11.114 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:16:11.114 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:16:11.114 | 99.99th=[12684] 00:16:11.114 lat (msec) : >=2000=100.00% 00:16:11.114 cpu : usr=0.01%, sys=0.47%, ctx=110, majf=0, minf=18945 00:16:11.114 IO depths : 1=1.4%, 2=2.7%, 4=5.4%, 8=10.8%, 16=21.6%, 32=43.2%, >=64=14.9% 00:16:11.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.114 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:11.114 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.114 job3: (groupid=0, jobs=1): err= 0: pid=2024629: Thu Apr 18 17:32:35 2024 00:16:11.114 read: IOPS=20, BW=20.5MiB/s (21.5MB/s)(207MiB/10098msec) 00:16:11.114 slat (usec): min=47, max=2127.3k, avg=48307.65, stdev=282844.77 00:16:11.114 clat (msec): min=96, max=9421, avg=1478.59, stdev=2312.29 00:16:11.114 lat (msec): min=100, max=9422, avg=1526.90, stdev=2375.67 00:16:11.114 clat percentiles (msec): 00:16:11.114 | 1.00th=[ 101], 5.00th=[ 146], 10.00th=[ 232], 20.00th=[ 401], 00:16:11.114 | 30.00th=[ 651], 40.00th=[ 776], 50.00th=[ 885], 60.00th=[ 978], 00:16:11.114 | 70.00th=[ 1020], 80.00th=[ 1083], 90.00th=[ 3239], 95.00th=[ 9329], 00:16:11.114 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:16:11.114 | 99.99th=[ 9463] 00:16:11.114 bw ( KiB/s): min=38912, max=124928, per=3.39%, avg=81920.00, stdev=60822.50, samples=2 00:16:11.114 iops : min= 38, max= 122, avg=80.00, stdev=59.40, samples=2 00:16:11.114 lat (msec) : 100=0.48%, 250=10.14%, 500=11.11%, 750=15.46%, 1000=27.05% 00:16:11.114 lat (msec) : 2000=25.12%, >=2000=10.63% 00:16:11.114 cpu : usr=0.01%, sys=0.93%, ctx=183, majf=0, minf=32769 00:16:11.114 IO depths : 1=0.5%, 2=1.0%, 4=1.9%, 8=3.9%, 16=7.7%, 32=15.5%, >=64=69.6% 00:16:11.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.114 complete : 0=0.0%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.2% 00:16:11.114 issued rwts: total=207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.114 job3: (groupid=0, jobs=1): err= 0: pid=2024630: Thu Apr 18 17:32:35 2024 00:16:11.114 read: IOPS=3, BW=3726KiB/s (3815kB/s)(46.0MiB/12643msec) 00:16:11.114 slat (usec): min=441, max=2161.0k, avg=230701.50, stdev=648627.64 00:16:11.114 clat (msec): min=2029, max=12641, avg=9993.36, stdev=3553.28 00:16:11.114 lat (msec): min=4132, max=12642, avg=10224.06, stdev=3364.23 00:16:11.114 clat percentiles (msec): 00:16:11.114 | 1.00th=[ 2022], 5.00th=[ 4144], 10.00th=[ 4144], 20.00th=[ 6275], 00:16:11.114 | 30.00th=[ 6342], 40.00th=[10671], 50.00th=[12550], 60.00th=[12550], 00:16:11.114 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:16:11.114 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:16:11.114 | 99.99th=[12684] 00:16:11.114 lat (msec) : >=2000=100.00% 00:16:11.114 cpu : usr=0.00%, sys=0.30%, ctx=50, majf=0, minf=11777 00:16:11.114 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:16:11.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.114 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:11.114 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.114 job3: (groupid=0, jobs=1): err= 0: pid=2024631: Thu Apr 18 17:32:35 2024 00:16:11.114 read: IOPS=20, BW=20.3MiB/s (21.3MB/s)(204MiB/10061msec) 00:16:11.114 slat (usec): min=54, max=2096.2k, avg=49014.07, stdev=280535.37 00:16:11.114 clat (msec): min=60, max=9402, avg=1416.71, stdev=1745.09 00:16:11.114 lat (msec): min=102, max=9442, avg=1465.72, stdev=1831.65 00:16:11.114 clat percentiles (msec): 00:16:11.114 | 1.00th=[ 104], 5.00th=[ 
112], 10.00th=[ 241], 20.00th=[ 368], 00:16:11.114 | 30.00th=[ 523], 40.00th=[ 785], 50.00th=[ 869], 60.00th=[ 1062], 00:16:11.114 | 70.00th=[ 1167], 80.00th=[ 1250], 90.00th=[ 3306], 95.00th=[ 5336], 00:16:11.114 | 99.00th=[ 7483], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:16:11.114 | 99.99th=[ 9463] 00:16:11.114 bw ( KiB/s): min=30720, max=126976, per=3.26%, avg=78848.00, stdev=68063.27, samples=2 00:16:11.114 iops : min= 30, max= 124, avg=77.00, stdev=66.47, samples=2 00:16:11.114 lat (msec) : 100=0.49%, 250=13.24%, 500=9.31%, 750=15.20%, 1000=18.14% 00:16:11.114 lat (msec) : 2000=26.47%, >=2000=17.16% 00:16:11.114 cpu : usr=0.00%, sys=1.11%, ctx=205, majf=0, minf=32769 00:16:11.114 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.7%, >=64=69.1% 00:16:11.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.114 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:16:11.114 issued rwts: total=204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.115 job3: (groupid=0, jobs=1): err= 0: pid=2024632: Thu Apr 18 17:32:35 2024 00:16:11.115 read: IOPS=33, BW=33.6MiB/s (35.3MB/s)(356MiB/10584msec) 00:16:11.115 slat (usec): min=48, max=2031.5k, avg=29541.79, stdev=202769.97 00:16:11.115 clat (msec): min=63, max=8455, avg=3626.42, stdev=2311.00 00:16:11.115 lat (msec): min=560, max=8544, avg=3655.96, stdev=2320.22 00:16:11.115 clat percentiles (msec): 00:16:11.115 | 1.00th=[ 558], 5.00th=[ 600], 10.00th=[ 659], 20.00th=[ 818], 00:16:11.115 | 30.00th=[ 1955], 40.00th=[ 3373], 50.00th=[ 3809], 60.00th=[ 4279], 00:16:11.115 | 70.00th=[ 5604], 80.00th=[ 5805], 90.00th=[ 6477], 95.00th=[ 7349], 00:16:11.115 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:16:11.115 | 99.99th=[ 8423] 00:16:11.115 bw ( KiB/s): min= 8192, max=104448, per=2.15%, avg=51882.67, stdev=31395.25, samples=9 00:16:11.115 iops : min= 8, max= 102, avg=50.67, stdev=30.66, samples=9 00:16:11.115 lat (msec) : 100=0.28%, 750=15.17%, 1000=8.71%, 2000=8.15%, >=2000=67.70% 00:16:11.115 cpu : usr=0.01%, sys=1.01%, ctx=263, majf=0, minf=32769 00:16:11.115 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=9.0%, >=64=82.3% 00:16:11.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.115 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:11.115 issued rwts: total=356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.115 job3: (groupid=0, jobs=1): err= 0: pid=2024633: Thu Apr 18 17:32:35 2024 00:16:11.115 read: IOPS=19, BW=19.0MiB/s (20.0MB/s)(192MiB/10085msec) 00:16:11.115 slat (usec): min=103, max=2253.9k, avg=52209.30, stdev=297203.41 00:16:11.115 clat (msec): min=59, max=9584, avg=2709.31, stdev=3511.35 00:16:11.115 lat (msec): min=122, max=9585, avg=2761.52, stdev=3541.79 00:16:11.115 clat percentiles (msec): 00:16:11.115 | 1.00th=[ 123], 5.00th=[ 220], 10.00th=[ 321], 20.00th=[ 477], 00:16:11.115 | 30.00th=[ 659], 40.00th=[ 760], 50.00th=[ 936], 60.00th=[ 1200], 00:16:11.115 | 70.00th=[ 1267], 80.00th=[ 7550], 90.00th=[ 9463], 95.00th=[ 9463], 00:16:11.115 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:16:11.115 | 99.99th=[ 9597] 00:16:11.115 bw ( KiB/s): min=26624, max=104448, per=2.71%, avg=65536.00, stdev=55029.88, samples=2 00:16:11.115 iops : min= 26, max= 102, avg=64.00, stdev=53.74, samples=2 00:16:11.115 lat 
(msec) : 100=0.52%, 250=7.81%, 500=16.15%, 750=14.58%, 1000=12.50% 00:16:11.115 lat (msec) : 2000=22.92%, >=2000=25.52% 00:16:11.115 cpu : usr=0.00%, sys=1.15%, ctx=218, majf=0, minf=32769 00:16:11.115 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.2%, 16=8.3%, 32=16.7%, >=64=67.2% 00:16:11.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.115 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.5% 00:16:11.115 issued rwts: total=192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.115 job3: (groupid=0, jobs=1): err= 0: pid=2024634: Thu Apr 18 17:32:35 2024 00:16:11.115 read: IOPS=7, BW=7757KiB/s (7943kB/s)(80.0MiB/10561msec) 00:16:11.115 slat (usec): min=571, max=2087.1k, avg=131198.90, stdev=462405.97 00:16:11.115 clat (msec): min=64, max=10456, avg=3598.51, stdev=2547.95 00:16:11.115 lat (msec): min=1541, max=10560, avg=3729.71, stdev=2632.51 00:16:11.115 clat percentiles (msec): 00:16:11.115 | 1.00th=[ 65], 5.00th=[ 1552], 10.00th=[ 1552], 20.00th=[ 1653], 00:16:11.115 | 30.00th=[ 1854], 40.00th=[ 1955], 50.00th=[ 2106], 60.00th=[ 4144], 00:16:11.115 | 70.00th=[ 4279], 80.00th=[ 4279], 90.00th=[ 8490], 95.00th=[ 8557], 00:16:11.115 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:16:11.115 | 99.99th=[10402] 00:16:11.115 lat (msec) : 100=1.25%, 2000=43.75%, >=2000=55.00% 00:16:11.115 cpu : usr=0.00%, sys=0.56%, ctx=121, majf=0, minf=20481 00:16:11.115 IO depths : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.0%, 16=20.0%, 32=40.0%, >=64=21.3% 00:16:11.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.115 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:11.115 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.115 job3: (groupid=0, jobs=1): err= 0: pid=2024635: Thu Apr 18 17:32:35 2024 00:16:11.115 read: IOPS=15, BW=15.9MiB/s (16.7MB/s)(160MiB/10073msec) 00:16:11.115 slat (usec): min=137, max=2127.3k, avg=62579.90, stdev=319317.21 00:16:11.115 clat (msec): min=59, max=9586, avg=1488.66, stdev=2293.59 00:16:11.115 lat (msec): min=122, max=9670, avg=1551.24, stdev=2382.04 00:16:11.115 clat percentiles (msec): 00:16:11.115 | 1.00th=[ 123], 5.00th=[ 146], 10.00th=[ 241], 20.00th=[ 397], 00:16:11.115 | 30.00th=[ 493], 40.00th=[ 659], 50.00th=[ 785], 60.00th=[ 944], 00:16:11.115 | 70.00th=[ 1083], 80.00th=[ 1217], 90.00th=[ 3473], 95.00th=[ 9463], 00:16:11.115 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:16:11.115 | 99.99th=[ 9597] 00:16:11.115 bw ( KiB/s): min=65536, max=65536, per=2.71%, avg=65536.00, stdev= 0.00, samples=1 00:16:11.115 iops : min= 64, max= 64, avg=64.00, stdev= 0.00, samples=1 00:16:11.115 lat (msec) : 100=0.62%, 250=10.00%, 500=20.00%, 750=12.50%, 1000=18.75% 00:16:11.115 lat (msec) : 2000=26.88%, >=2000=11.25% 00:16:11.115 cpu : usr=0.03%, sys=1.01%, ctx=185, majf=0, minf=32769 00:16:11.115 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=5.0%, 16=10.0%, 32=20.0%, >=64=60.6% 00:16:11.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.115 complete : 0=0.0%, 4=97.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.9% 00:16:11.115 issued rwts: total=160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.115 job3: (groupid=0, jobs=1): err= 0: pid=2024636: Thu Apr 18 17:32:35 2024 00:16:11.115 
read: IOPS=5, BW=6005KiB/s (6149kB/s)(62.0MiB/10573msec) 00:16:11.115 slat (usec): min=448, max=2112.4k, avg=168931.29, stdev=534472.24 00:16:11.115 clat (msec): min=98, max=10483, avg=4729.30, stdev=3397.97 00:16:11.115 lat (msec): min=1763, max=10572, avg=4898.23, stdev=3424.25 00:16:11.115 clat percentiles (msec): 00:16:11.115 | 1.00th=[ 100], 5.00th=[ 1854], 10.00th=[ 1871], 20.00th=[ 1888], 00:16:11.115 | 30.00th=[ 2022], 40.00th=[ 2056], 50.00th=[ 2165], 60.00th=[ 6409], 00:16:11.115 | 70.00th=[ 6477], 80.00th=[ 8658], 90.00th=[10402], 95.00th=[10402], 00:16:11.115 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:16:11.115 | 99.99th=[10537] 00:16:11.115 lat (msec) : 100=1.61%, 2000=27.42%, >=2000=70.97% 00:16:11.115 cpu : usr=0.00%, sys=0.38%, ctx=83, majf=0, minf=15873 00:16:11.115 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0% 00:16:11.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.115 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:11.115 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.115 job3: (groupid=0, jobs=1): err= 0: pid=2024637: Thu Apr 18 17:32:35 2024 00:16:11.115 read: IOPS=35, BW=35.3MiB/s (37.0MB/s)(354MiB/10026msec) 00:16:11.115 slat (usec): min=46, max=2098.7k, avg=28244.48, stdev=213183.65 00:16:11.115 clat (msec): min=24, max=9141, avg=1233.74, stdev=2194.82 00:16:11.115 lat (msec): min=38, max=9143, avg=1261.98, stdev=2233.69 00:16:11.115 clat percentiles (msec): 00:16:11.115 | 1.00th=[ 44], 5.00th=[ 113], 10.00th=[ 180], 20.00th=[ 305], 00:16:11.115 | 30.00th=[ 456], 40.00th=[ 498], 50.00th=[ 514], 60.00th=[ 535], 00:16:11.115 | 70.00th=[ 625], 80.00th=[ 785], 90.00th=[ 2970], 95.00th=[ 7215], 00:16:11.115 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:16:11.115 | 99.99th=[ 9194] 00:16:11.115 bw ( KiB/s): min=212992, max=251904, per=9.62%, avg=232448.00, stdev=27514.94, samples=2 00:16:11.115 iops : min= 208, max= 246, avg=227.00, stdev=26.87, samples=2 00:16:11.115 lat (msec) : 50=1.41%, 100=3.39%, 250=11.86%, 500=25.71%, 750=36.16% 00:16:11.115 lat (msec) : 1000=8.76%, >=2000=12.71% 00:16:11.115 cpu : usr=0.02%, sys=0.94%, ctx=265, majf=0, minf=32769 00:16:11.115 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.0%, >=64=82.2% 00:16:11.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.115 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:11.115 issued rwts: total=354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.115 job4: (groupid=0, jobs=1): err= 0: pid=2024638: Thu Apr 18 17:32:35 2024 00:16:11.115 read: IOPS=34, BW=34.5MiB/s (36.1MB/s)(347MiB/10069msec) 00:16:11.115 slat (usec): min=44, max=2074.6k, avg=28856.33, stdev=198833.01 00:16:11.115 clat (msec): min=53, max=7455, avg=3086.78, stdev=2692.31 00:16:11.115 lat (msec): min=108, max=7461, avg=3115.64, stdev=2696.82 00:16:11.115 clat percentiles (msec): 00:16:11.115 | 1.00th=[ 110], 5.00th=[ 116], 10.00th=[ 300], 20.00th=[ 567], 00:16:11.115 | 30.00th=[ 860], 40.00th=[ 1036], 50.00th=[ 2232], 60.00th=[ 3943], 00:16:11.115 | 70.00th=[ 6342], 80.00th=[ 6611], 90.00th=[ 6745], 95.00th=[ 6812], 00:16:11.115 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 7483], 99.95th=[ 7483], 00:16:11.115 | 99.99th=[ 7483] 00:16:11.115 bw ( KiB/s): 
min= 8192, max=131072, per=2.65%, avg=64073.14, stdev=51671.79, samples=7 00:16:11.115 iops : min= 8, max= 128, avg=62.57, stdev=50.46, samples=7 00:16:11.115 lat (msec) : 100=0.29%, 250=9.51%, 500=8.93%, 750=8.93%, 1000=5.76% 00:16:11.115 lat (msec) : 2000=16.43%, >=2000=50.14% 00:16:11.115 cpu : usr=0.01%, sys=1.00%, ctx=294, majf=0, minf=32769 00:16:11.115 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.6%, 32=9.2%, >=64=81.8% 00:16:11.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.115 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:11.115 issued rwts: total=347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.115 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.115 job4: (groupid=0, jobs=1): err= 0: pid=2024639: Thu Apr 18 17:32:35 2024 00:16:11.115 read: IOPS=141, BW=141MiB/s (148MB/s)(1436MiB/10156msec) 00:16:11.115 slat (usec): min=48, max=2043.7k, avg=6992.48, stdev=88150.50 00:16:11.115 clat (msec): min=104, max=8722, avg=870.88, stdev=1664.06 00:16:11.115 lat (msec): min=109, max=8727, avg=877.87, stdev=1672.47 00:16:11.115 clat percentiles (msec): 00:16:11.115 | 1.00th=[ 124], 5.00th=[ 142], 10.00th=[ 155], 20.00th=[ 165], 00:16:11.115 | 30.00th=[ 178], 40.00th=[ 190], 50.00th=[ 205], 60.00th=[ 226], 00:16:11.115 | 70.00th=[ 326], 80.00th=[ 684], 90.00th=[ 2802], 95.00th=[ 4933], 00:16:11.115 | 99.00th=[ 7013], 99.50th=[ 7080], 99.90th=[ 8658], 99.95th=[ 8658], 00:16:11.115 | 99.99th=[ 8658] 00:16:11.116 bw ( KiB/s): min=16351, max=696320, per=10.08%, avg=243522.82, stdev=286234.27, samples=11 00:16:11.116 iops : min= 15, max= 680, avg=237.73, stdev=279.60, samples=11 00:16:11.116 lat (msec) : 250=66.02%, 500=7.38%, 750=8.77%, 1000=4.53%, >=2000=13.30% 00:16:11.116 cpu : usr=0.11%, sys=2.06%, ctx=880, majf=0, minf=32769 00:16:11.116 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:16:11.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.116 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.116 issued rwts: total=1436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.116 job4: (groupid=0, jobs=1): err= 0: pid=2024641: Thu Apr 18 17:32:35 2024 00:16:11.116 read: IOPS=24, BW=24.8MiB/s (26.1MB/s)(251MiB/10101msec) 00:16:11.116 slat (usec): min=84, max=2099.5k, avg=39940.40, stdev=239184.62 00:16:11.116 clat (msec): min=72, max=9369, avg=3702.59, stdev=3674.06 00:16:11.116 lat (msec): min=148, max=9385, avg=3742.53, stdev=3684.15 00:16:11.116 clat percentiles (msec): 00:16:11.116 | 1.00th=[ 153], 5.00th=[ 264], 10.00th=[ 506], 20.00th=[ 600], 00:16:11.116 | 30.00th=[ 810], 40.00th=[ 1083], 50.00th=[ 1318], 60.00th=[ 3037], 00:16:11.116 | 70.00th=[ 7483], 80.00th=[ 8926], 90.00th=[ 8926], 95.00th=[ 8926], 00:16:11.116 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:16:11.116 | 99.99th=[ 9329] 00:16:11.116 bw ( KiB/s): min=49152, max=131072, per=3.48%, avg=83968.00, stdev=42319.83, samples=3 00:16:11.116 iops : min= 48, max= 128, avg=82.00, stdev=41.33, samples=3 00:16:11.116 lat (msec) : 100=0.40%, 250=3.59%, 500=5.98%, 750=14.74%, 1000=12.75% 00:16:11.116 lat (msec) : 2000=20.32%, >=2000=42.23% 00:16:11.116 cpu : usr=0.05%, sys=1.13%, ctx=337, majf=0, minf=32769 00:16:11.116 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.7%, >=64=74.9% 00:16:11.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:11.116 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:16:11.116 issued rwts: total=251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.116 job4: (groupid=0, jobs=1): err= 0: pid=2024642: Thu Apr 18 17:32:35 2024 00:16:11.116 read: IOPS=26, BW=26.7MiB/s (28.0MB/s)(270MiB/10095msec) 00:16:11.116 slat (usec): min=50, max=2109.1k, avg=37051.54, stdev=236676.55 00:16:11.116 clat (msec): min=88, max=8956, avg=1688.15, stdev=2233.79 00:16:11.116 lat (msec): min=96, max=8977, avg=1725.20, stdev=2275.79 00:16:11.116 clat percentiles (msec): 00:16:11.116 | 1.00th=[ 127], 5.00th=[ 218], 10.00th=[ 347], 20.00th=[ 558], 00:16:11.116 | 30.00th=[ 718], 40.00th=[ 827], 50.00th=[ 852], 60.00th=[ 902], 00:16:11.116 | 70.00th=[ 927], 80.00th=[ 1028], 90.00th=[ 6946], 95.00th=[ 7080], 00:16:11.116 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:16:11.116 | 99.99th=[ 8926] 00:16:11.116 bw ( KiB/s): min=20480, max=137490, per=4.04%, avg=97712.67, stdev=66895.53, samples=3 00:16:11.116 iops : min= 20, max= 134, avg=95.33, stdev=65.25, samples=3 00:16:11.116 lat (msec) : 100=0.74%, 250=5.93%, 500=12.22%, 750=11.48%, 1000=45.56% 00:16:11.116 lat (msec) : 2000=5.56%, >=2000=18.52% 00:16:11.116 cpu : usr=0.00%, sys=0.76%, ctx=240, majf=0, minf=32769 00:16:11.116 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.9%, >=64=76.7% 00:16:11.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.116 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:16:11.116 issued rwts: total=270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.116 job4: (groupid=0, jobs=1): err= 0: pid=2024643: Thu Apr 18 17:32:35 2024 00:16:11.116 read: IOPS=14, BW=14.7MiB/s (15.4MB/s)(148MiB/10088msec) 00:16:11.116 slat (usec): min=176, max=2078.2k, avg=67697.44, stdev=323927.04 00:16:11.116 clat (msec): min=67, max=9983, avg=3172.63, stdev=3591.52 00:16:11.116 lat (msec): min=154, max=10006, avg=3240.32, stdev=3626.81 00:16:11.116 clat percentiles (msec): 00:16:11.116 | 1.00th=[ 155], 5.00th=[ 249], 10.00th=[ 342], 20.00th=[ 592], 00:16:11.116 | 30.00th=[ 810], 40.00th=[ 1083], 50.00th=[ 1485], 60.00th=[ 1620], 00:16:11.116 | 70.00th=[ 3708], 80.00th=[ 8087], 90.00th=[ 9866], 95.00th=[ 9866], 00:16:11.116 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:16:11.116 | 99.99th=[10000] 00:16:11.116 bw ( KiB/s): min=40960, max=40960, per=1.70%, avg=40960.00, stdev= 0.00, samples=1 00:16:11.116 iops : min= 40, max= 40, avg=40.00, stdev= 0.00, samples=1 00:16:11.116 lat (msec) : 100=0.68%, 250=4.73%, 500=10.14%, 750=10.81%, 1000=11.49% 00:16:11.116 lat (msec) : 2000=31.76%, >=2000=30.41% 00:16:11.116 cpu : usr=0.00%, sys=1.04%, ctx=248, majf=0, minf=32769 00:16:11.116 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=5.4%, 16=10.8%, 32=21.6%, >=64=57.4% 00:16:11.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.116 complete : 0=0.0%, 4=95.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.5% 00:16:11.116 issued rwts: total=148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.116 job4: (groupid=0, jobs=1): err= 0: pid=2024644: Thu Apr 18 17:32:35 2024 00:16:11.116 read: IOPS=11, BW=11.8MiB/s (12.3MB/s)(125MiB/10622msec) 00:16:11.116 slat (usec): min=456, max=2046.4k, 
avg=84440.26, stdev=375399.81 00:16:11.116 clat (msec): min=66, max=10621, avg=7822.38, stdev=3085.12 00:16:11.116 lat (msec): min=2030, max=10621, avg=7906.82, stdev=3014.76 00:16:11.116 clat percentiles (msec): 00:16:11.116 | 1.00th=[ 2039], 5.00th=[ 2198], 10.00th=[ 2198], 20.00th=[ 4279], 00:16:11.116 | 30.00th=[ 6477], 40.00th=[ 8356], 50.00th=[ 8557], 60.00th=[10402], 00:16:11.116 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671], 00:16:11.116 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.116 | 99.99th=[10671] 00:16:11.116 lat (msec) : 100=0.80%, >=2000=99.20% 00:16:11.116 cpu : usr=0.00%, sys=0.74%, ctx=149, majf=0, minf=32001 00:16:11.116 IO depths : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.4%, 16=12.8%, 32=25.6%, >=64=49.6% 00:16:11.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.116 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:11.116 issued rwts: total=125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.116 job4: (groupid=0, jobs=1): err= 0: pid=2024645: Thu Apr 18 17:32:35 2024 00:16:11.116 read: IOPS=230, BW=231MiB/s (242MB/s)(2327MiB/10075msec) 00:16:11.116 slat (usec): min=51, max=2002.1k, avg=4292.36, stdev=55783.15 00:16:11.116 clat (msec): min=72, max=3711, avg=445.49, stdev=719.90 00:16:11.116 lat (msec): min=74, max=3722, avg=449.78, stdev=724.42 00:16:11.116 clat percentiles (msec): 00:16:11.116 | 1.00th=[ 148], 5.00th=[ 150], 10.00th=[ 150], 20.00th=[ 153], 00:16:11.116 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 155], 60.00th=[ 157], 00:16:11.116 | 70.00th=[ 418], 80.00th=[ 477], 90.00th=[ 617], 95.00th=[ 2500], 00:16:11.116 | 99.00th=[ 3574], 99.50th=[ 3641], 99.90th=[ 3675], 99.95th=[ 3708], 00:16:11.116 | 99.99th=[ 3708] 00:16:11.116 bw ( KiB/s): min=10240, max=856064, per=14.34%, avg=346411.00, stdev=318779.85, samples=13 00:16:11.116 iops : min= 10, max= 836, avg=338.23, stdev=311.34, samples=13 00:16:11.116 lat (msec) : 100=0.39%, 250=64.50%, 500=19.98%, 750=6.36%, 1000=0.43% 00:16:11.116 lat (msec) : 2000=2.02%, >=2000=6.32% 00:16:11.116 cpu : usr=0.15%, sys=2.34%, ctx=1820, majf=0, minf=32769 00:16:11.116 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:16:11.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.116 issued rwts: total=2327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.116 job4: (groupid=0, jobs=1): err= 0: pid=2024646: Thu Apr 18 17:32:35 2024 00:16:11.116 read: IOPS=85, BW=85.1MiB/s (89.2MB/s)(859MiB/10099msec) 00:16:11.116 slat (usec): min=44, max=2067.9k, avg=11638.16, stdev=116222.49 00:16:11.116 clat (msec): min=94, max=4641, avg=1061.97, stdev=1139.07 00:16:11.116 lat (msec): min=165, max=4662, avg=1073.61, stdev=1146.96 00:16:11.116 clat percentiles (msec): 00:16:11.116 | 1.00th=[ 171], 5.00th=[ 317], 10.00th=[ 330], 20.00th=[ 401], 00:16:11.116 | 30.00th=[ 443], 40.00th=[ 493], 50.00th=[ 542], 60.00th=[ 625], 00:16:11.116 | 70.00th=[ 810], 80.00th=[ 2567], 90.00th=[ 2869], 95.00th=[ 2937], 00:16:11.116 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4665], 99.95th=[ 4665], 00:16:11.116 | 99.99th=[ 4665] 00:16:11.116 bw ( KiB/s): min=32768, max=321536, per=7.75%, avg=187340.88, stdev=92890.61, samples=8 00:16:11.116 iops : min= 32, max= 314, 
avg=182.88, stdev=90.70, samples=8 00:16:11.116 lat (msec) : 100=0.12%, 250=3.61%, 500=38.30%, 750=25.03%, 1000=12.81% 00:16:11.116 lat (msec) : >=2000=20.14% 00:16:11.116 cpu : usr=0.08%, sys=1.60%, ctx=662, majf=0, minf=32769 00:16:11.116 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7% 00:16:11.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.116 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.116 issued rwts: total=859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.116 job4: (groupid=0, jobs=1): err= 0: pid=2024647: Thu Apr 18 17:32:35 2024 00:16:11.116 read: IOPS=32, BW=32.4MiB/s (33.9MB/s)(326MiB/10071msec) 00:16:11.116 slat (usec): min=66, max=2069.7k, avg=30803.57, stdev=215461.39 00:16:11.116 clat (msec): min=26, max=8834, avg=2063.07, stdev=2885.60 00:16:11.116 lat (msec): min=110, max=8863, avg=2093.88, stdev=2909.33 00:16:11.116 clat percentiles (msec): 00:16:11.116 | 1.00th=[ 113], 5.00th=[ 134], 10.00th=[ 300], 20.00th=[ 347], 00:16:11.116 | 30.00th=[ 558], 40.00th=[ 676], 50.00th=[ 751], 60.00th=[ 944], 00:16:11.116 | 70.00th=[ 1003], 80.00th=[ 3037], 90.00th=[ 8658], 95.00th=[ 8658], 00:16:11.116 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:16:11.116 | 99.99th=[ 8792] 00:16:11.116 bw ( KiB/s): min=65536, max=203174, per=5.57%, avg=134626.00, stdev=68820.60, samples=3 00:16:11.116 iops : min= 64, max= 198, avg=131.33, stdev=67.00, samples=3 00:16:11.116 lat (msec) : 50=0.31%, 250=9.51%, 500=19.02%, 750=20.86%, 1000=19.33% 00:16:11.116 lat (msec) : 2000=8.90%, >=2000=22.09% 00:16:11.116 cpu : usr=0.01%, sys=1.08%, ctx=257, majf=0, minf=32769 00:16:11.116 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=4.9%, 32=9.8%, >=64=80.7% 00:16:11.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.116 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:11.116 issued rwts: total=326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.117 job4: (groupid=0, jobs=1): err= 0: pid=2024648: Thu Apr 18 17:32:35 2024 00:16:11.117 read: IOPS=6, BW=6902KiB/s (7068kB/s)(71.0MiB/10533msec) 00:16:11.117 slat (usec): min=404, max=2098.3k, avg=147110.05, stdev=512440.27 00:16:11.117 clat (msec): min=87, max=10530, avg=5768.57, stdev=3454.50 00:16:11.117 lat (msec): min=1889, max=10532, avg=5915.68, stdev=3431.42 00:16:11.117 clat percentiles (msec): 00:16:11.117 | 1.00th=[ 88], 5.00th=[ 1888], 10.00th=[ 1905], 20.00th=[ 2039], 00:16:11.117 | 30.00th=[ 2039], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 6477], 00:16:11.117 | 70.00th=[ 8658], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:16:11.117 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:16:11.117 | 99.99th=[10537] 00:16:11.117 lat (msec) : 100=1.41%, 2000=9.86%, >=2000=88.73% 00:16:11.117 cpu : usr=0.00%, sys=0.33%, ctx=66, majf=0, minf=18177 00:16:11.117 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.5%, 32=45.1%, >=64=11.3% 00:16:11.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.117 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:11.117 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.117 job4: (groupid=0, jobs=1): err= 0: pid=2024649: Thu 
Apr 18 17:32:35 2024 00:16:11.117 read: IOPS=13, BW=13.7MiB/s (14.4MB/s)(138MiB/10073msec) 00:16:11.117 slat (usec): min=171, max=2087.7k, avg=72459.27, stdev=336187.44 00:16:11.117 clat (msec): min=71, max=9995, avg=2829.23, stdev=3432.70 00:16:11.117 lat (msec): min=73, max=10027, avg=2901.69, stdev=3479.28 00:16:11.117 clat percentiles (msec): 00:16:11.117 | 1.00th=[ 74], 5.00th=[ 155], 10.00th=[ 259], 20.00th=[ 510], 00:16:11.117 | 30.00th=[ 776], 40.00th=[ 953], 50.00th=[ 1234], 60.00th=[ 1401], 00:16:11.117 | 70.00th=[ 1636], 80.00th=[ 5940], 90.00th=[ 9866], 95.00th=[10000], 00:16:11.117 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:16:11.117 | 99.99th=[10000] 00:16:11.117 bw ( KiB/s): min=22528, max=22528, per=0.93%, avg=22528.00, stdev= 0.00, samples=1 00:16:11.117 iops : min= 22, max= 22, avg=22.00, stdev= 0.00, samples=1 00:16:11.117 lat (msec) : 100=4.35%, 250=4.35%, 500=10.14%, 750=10.14%, 1000=13.77% 00:16:11.117 lat (msec) : 2000=28.99%, >=2000=28.26% 00:16:11.117 cpu : usr=0.00%, sys=1.11%, ctx=257, majf=0, minf=32769 00:16:11.117 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.8%, 16=11.6%, 32=23.2%, >=64=54.3% 00:16:11.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.117 complete : 0=0.0%, 4=91.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=8.3% 00:16:11.117 issued rwts: total=138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.117 job4: (groupid=0, jobs=1): err= 0: pid=2024650: Thu Apr 18 17:32:35 2024 00:16:11.117 read: IOPS=54, BW=54.7MiB/s (57.3MB/s)(580MiB/10605msec) 00:16:11.117 slat (usec): min=47, max=1955.8k, avg=18140.11, stdev=130544.28 00:16:11.117 clat (msec): min=79, max=4890, avg=1606.27, stdev=1194.14 00:16:11.117 lat (msec): min=646, max=4891, avg=1624.41, stdev=1201.58 00:16:11.117 clat percentiles (msec): 00:16:11.117 | 1.00th=[ 693], 5.00th=[ 718], 10.00th=[ 735], 20.00th=[ 751], 00:16:11.117 | 30.00th=[ 776], 40.00th=[ 810], 50.00th=[ 860], 60.00th=[ 1116], 00:16:11.117 | 70.00th=[ 2056], 80.00th=[ 2635], 90.00th=[ 3608], 95.00th=[ 3742], 00:16:11.117 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:16:11.117 | 99.99th=[ 4866] 00:16:11.117 bw ( KiB/s): min= 1950, max=190464, per=3.84%, avg=92764.60, stdev=72979.40, samples=10 00:16:11.117 iops : min= 1, max= 186, avg=90.50, stdev=71.39, samples=10 00:16:11.117 lat (msec) : 100=0.17%, 750=18.45%, 1000=40.86%, 2000=10.00%, >=2000=30.52% 00:16:11.117 cpu : usr=0.01%, sys=1.29%, ctx=483, majf=0, minf=32769 00:16:11.117 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:16:11.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.117 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:11.117 issued rwts: total=580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.117 job4: (groupid=0, jobs=1): err= 0: pid=2024651: Thu Apr 18 17:32:35 2024 00:16:11.117 read: IOPS=77, BW=77.3MiB/s (81.1MB/s)(779MiB/10076msec) 00:16:11.117 slat (usec): min=41, max=2056.9k, avg=12829.65, stdev=121005.68 00:16:11.117 clat (msec): min=74, max=4871, avg=1226.97, stdev=1260.83 00:16:11.117 lat (msec): min=126, max=4874, avg=1239.80, stdev=1267.58 00:16:11.117 clat percentiles (msec): 00:16:11.117 | 1.00th=[ 128], 5.00th=[ 351], 10.00th=[ 380], 20.00th=[ 472], 00:16:11.117 | 30.00th=[ 550], 40.00th=[ 617], 50.00th=[ 659], 60.00th=[ 701], 
00:16:11.117 | 70.00th=[ 768], 80.00th=[ 2668], 90.00th=[ 2836], 95.00th=[ 4732], 00:16:11.117 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:16:11.117 | 99.99th=[ 4866] 00:16:11.117 bw ( KiB/s): min=32768, max=273884, per=6.91%, avg=166843.50, stdev=69880.96, samples=8 00:16:11.117 iops : min= 32, max= 267, avg=162.88, stdev=68.14, samples=8 00:16:11.117 lat (msec) : 100=0.13%, 250=2.57%, 500=20.03%, 750=45.70%, 1000=7.83% 00:16:11.117 lat (msec) : >=2000=23.75% 00:16:11.117 cpu : usr=0.07%, sys=1.67%, ctx=521, majf=0, minf=32769 00:16:11.117 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.9% 00:16:11.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.117 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:11.117 issued rwts: total=779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.117 job5: (groupid=0, jobs=1): err= 0: pid=2024653: Thu Apr 18 17:32:35 2024 00:16:11.117 read: IOPS=68, BW=68.1MiB/s (71.4MB/s)(858MiB/12594msec) 00:16:11.117 slat (usec): min=54, max=2016.4k, avg=12171.76, stdev=131715.09 00:16:11.117 clat (msec): min=277, max=6709, avg=1293.53, stdev=2196.58 00:16:11.117 lat (msec): min=280, max=6712, avg=1305.70, stdev=2205.76 00:16:11.117 clat percentiles (msec): 00:16:11.117 | 1.00th=[ 279], 5.00th=[ 284], 10.00th=[ 288], 20.00th=[ 292], 00:16:11.117 | 30.00th=[ 296], 40.00th=[ 300], 50.00th=[ 300], 60.00th=[ 305], 00:16:11.117 | 70.00th=[ 309], 80.00th=[ 368], 90.00th=[ 6477], 95.00th=[ 6611], 00:16:11.117 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678], 00:16:11.117 | 99.99th=[ 6678] 00:16:11.117 bw ( KiB/s): min= 1456, max=450560, per=8.85%, avg=213785.14, stdev=214905.99, samples=7 00:16:11.117 iops : min= 1, max= 440, avg=208.71, stdev=209.94, samples=7 00:16:11.117 lat (msec) : 500=81.82%, >=2000=18.18% 00:16:11.117 cpu : usr=0.03%, sys=1.02%, ctx=1405, majf=0, minf=32769 00:16:11.117 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7% 00:16:11.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.117 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.117 issued rwts: total=858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.117 job5: (groupid=0, jobs=1): err= 0: pid=2024654: Thu Apr 18 17:32:35 2024 00:16:11.117 read: IOPS=6, BW=6372KiB/s (6525kB/s)(66.0MiB/10606msec) 00:16:11.117 slat (usec): min=414, max=2032.5k, avg=158809.12, stdev=516109.79 00:16:11.117 clat (msec): min=124, max=10604, avg=6740.45, stdev=2373.24 00:16:11.117 lat (msec): min=2156, max=10605, avg=6899.26, stdev=2272.22 00:16:11.117 clat percentiles (msec): 00:16:11.117 | 1.00th=[ 125], 5.00th=[ 2232], 10.00th=[ 2265], 20.00th=[ 6208], 00:16:11.117 | 30.00th=[ 6342], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 6477], 00:16:11.117 | 70.00th=[ 6544], 80.00th=[ 8658], 90.00th=[10537], 95.00th=[10537], 00:16:11.117 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:11.117 | 99.99th=[10671] 00:16:11.117 lat (msec) : 250=1.52%, >=2000=98.48% 00:16:11.117 cpu : usr=0.00%, sys=0.37%, ctx=83, majf=0, minf=16897 00:16:11.117 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.1%, 16=24.2%, 32=48.5%, >=64=4.5% 00:16:11.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.117 complete : 0=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:11.117 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.117 job5: (groupid=0, jobs=1): err= 0: pid=2024655: Thu Apr 18 17:32:35 2024 00:16:11.117 read: IOPS=13, BW=13.9MiB/s (14.6MB/s)(175MiB/12610msec) 00:16:11.117 slat (usec): min=56, max=2023.1k, avg=59774.56, stdev=298643.76 00:16:11.117 clat (msec): min=2148, max=10252, avg=7226.63, stdev=2413.14 00:16:11.117 lat (msec): min=3918, max=10257, avg=7286.40, stdev=2392.47 00:16:11.117 clat percentiles (msec): 00:16:11.117 | 1.00th=[ 3910], 5.00th=[ 3943], 10.00th=[ 4010], 20.00th=[ 4111], 00:16:11.117 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8557], 00:16:11.117 | 70.00th=[ 9866], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:16:11.117 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:16:11.117 | 99.99th=[10268] 00:16:11.117 bw ( KiB/s): min= 2003, max=53248, per=1.02%, avg=24564.75, stdev=26505.15, samples=4 00:16:11.117 iops : min= 1, max= 52, avg=23.75, stdev=26.16, samples=4 00:16:11.117 lat (msec) : >=2000=100.00% 00:16:11.117 cpu : usr=0.02%, sys=0.65%, ctx=244, majf=0, minf=32769 00:16:11.117 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.6%, 16=9.1%, 32=18.3%, >=64=64.0% 00:16:11.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.117 complete : 0=0.0%, 4=98.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.0% 00:16:11.117 issued rwts: total=175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.117 job5: (groupid=0, jobs=1): err= 0: pid=2024656: Thu Apr 18 17:32:35 2024 00:16:11.117 read: IOPS=136, BW=136MiB/s (143MB/s)(1432MiB/10521msec) 00:16:11.117 slat (usec): min=55, max=2048.1k, avg=7294.21, stdev=88599.32 00:16:11.117 clat (msec): min=67, max=2920, avg=775.45, stdev=848.83 00:16:11.117 lat (msec): min=136, max=2922, avg=782.74, stdev=851.83 00:16:11.117 clat percentiles (msec): 00:16:11.117 | 1.00th=[ 136], 5.00th=[ 142], 10.00th=[ 251], 20.00th=[ 313], 00:16:11.117 | 30.00th=[ 334], 40.00th=[ 355], 50.00th=[ 380], 60.00th=[ 405], 00:16:11.117 | 70.00th=[ 514], 80.00th=[ 726], 90.00th=[ 2433], 95.00th=[ 2735], 00:16:11.117 | 99.00th=[ 2903], 99.50th=[ 2903], 99.90th=[ 2937], 99.95th=[ 2937], 00:16:11.117 | 99.99th=[ 2937] 00:16:11.117 bw ( KiB/s): min=43008, max=466944, per=11.05%, avg=267009.70, stdev=129471.11, samples=10 00:16:11.117 iops : min= 42, max= 456, avg=260.70, stdev=126.45, samples=10 00:16:11.118 lat (msec) : 100=0.07%, 250=9.85%, 500=58.59%, 750=11.52%, 1000=1.05% 00:16:11.118 lat (msec) : 2000=0.14%, >=2000=18.78% 00:16:11.118 cpu : usr=0.10%, sys=1.85%, ctx=917, majf=0, minf=32769 00:16:11.118 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:16:11.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.118 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.118 issued rwts: total=1432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.118 job5: (groupid=0, jobs=1): err= 0: pid=2024657: Thu Apr 18 17:32:35 2024 00:16:11.118 read: IOPS=134, BW=134MiB/s (141MB/s)(1414MiB/10542msec) 00:16:11.118 slat (usec): min=37, max=1986.2k, avg=7395.70, stdev=89887.76 00:16:11.118 clat (msec): min=72, max=2853, avg=735.65, stdev=845.39 00:16:11.118 lat (msec): min=116, max=2855, avg=743.05, 
stdev=849.60 00:16:11.118 clat percentiles (msec): 00:16:11.118 | 1.00th=[ 129], 5.00th=[ 157], 10.00th=[ 190], 20.00th=[ 234], 00:16:11.118 | 30.00th=[ 262], 40.00th=[ 300], 50.00th=[ 372], 60.00th=[ 435], 00:16:11.118 | 70.00th=[ 514], 80.00th=[ 718], 90.00th=[ 2400], 95.00th=[ 2769], 00:16:11.118 | 99.00th=[ 2836], 99.50th=[ 2836], 99.90th=[ 2836], 99.95th=[ 2869], 00:16:11.118 | 99.99th=[ 2869] 00:16:11.118 bw ( KiB/s): min= 1973, max=583680, per=9.92%, avg=239661.73, stdev=195560.00, samples=11 00:16:11.118 iops : min= 1, max= 570, avg=233.91, stdev=191.08, samples=11 00:16:11.118 lat (msec) : 100=0.07%, 250=25.18%, 500=42.57%, 750=12.52%, 1000=1.70% 00:16:11.118 lat (msec) : >=2000=17.96% 00:16:11.118 cpu : usr=0.12%, sys=1.75%, ctx=1036, majf=0, minf=32769 00:16:11.118 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:16:11.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.118 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.118 issued rwts: total=1414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.118 job5: (groupid=0, jobs=1): err= 0: pid=2024658: Thu Apr 18 17:32:35 2024 00:16:11.118 read: IOPS=22, BW=22.4MiB/s (23.5MB/s)(225MiB/10041msec) 00:16:11.118 slat (usec): min=53, max=2112.2k, avg=44468.31, stdev=265172.00 00:16:11.118 clat (msec): min=33, max=9490, avg=1646.95, stdev=2766.46 00:16:11.118 lat (msec): min=45, max=9493, avg=1691.42, stdev=2814.78 00:16:11.118 clat percentiles (msec): 00:16:11.118 | 1.00th=[ 46], 5.00th=[ 47], 10.00th=[ 105], 20.00th=[ 230], 00:16:11.118 | 30.00th=[ 372], 40.00th=[ 542], 50.00th=[ 718], 60.00th=[ 793], 00:16:11.118 | 70.00th=[ 911], 80.00th=[ 1045], 90.00th=[ 7684], 95.00th=[ 9463], 00:16:11.118 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:16:11.118 | 99.99th=[ 9463] 00:16:11.118 bw ( KiB/s): min=30720, max=169984, per=4.15%, avg=100352.00, stdev=98474.52, samples=2 00:16:11.118 iops : min= 30, max= 166, avg=98.00, stdev=96.17, samples=2 00:16:11.118 lat (msec) : 50=7.11%, 100=2.67%, 250=11.11%, 500=15.56%, 750=18.22% 00:16:11.118 lat (msec) : 1000=21.78%, 2000=9.33%, >=2000=14.22% 00:16:11.118 cpu : usr=0.02%, sys=0.99%, ctx=274, majf=0, minf=32769 00:16:11.118 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.1%, 32=14.2%, >=64=72.0% 00:16:11.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.118 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:16:11.118 issued rwts: total=225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.118 job5: (groupid=0, jobs=1): err= 0: pid=2024659: Thu Apr 18 17:32:35 2024 00:16:11.118 read: IOPS=24, BW=24.8MiB/s (26.0MB/s)(249MiB/10039msec) 00:16:11.118 slat (usec): min=51, max=2053.6k, avg=40169.33, stdev=238796.88 00:16:11.118 clat (msec): min=34, max=9353, avg=2271.13, stdev=3106.29 00:16:11.118 lat (msec): min=90, max=9368, avg=2311.30, stdev=3136.05 00:16:11.118 clat percentiles (msec): 00:16:11.118 | 1.00th=[ 93], 5.00th=[ 105], 10.00th=[ 159], 20.00th=[ 245], 00:16:11.118 | 30.00th=[ 518], 40.00th=[ 709], 50.00th=[ 785], 60.00th=[ 1070], 00:16:11.118 | 70.00th=[ 1368], 80.00th=[ 3339], 90.00th=[ 9194], 95.00th=[ 9194], 00:16:11.118 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9329], 99.95th=[ 9329], 00:16:11.118 | 99.99th=[ 9329] 00:16:11.118 bw ( KiB/s): min=96256, max=153600, 
per=5.17%, avg=124928.00, stdev=40548.33, samples=2 00:16:11.118 iops : min= 94, max= 150, avg=122.00, stdev=39.60, samples=2 00:16:11.118 lat (msec) : 50=0.40%, 100=1.61%, 250=18.07%, 500=8.84%, 750=14.06% 00:16:11.118 lat (msec) : 1000=16.47%, 2000=15.26%, >=2000=25.30% 00:16:11.118 cpu : usr=0.03%, sys=1.05%, ctx=286, majf=0, minf=32769 00:16:11.118 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.4%, 32=12.9%, >=64=74.7% 00:16:11.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.118 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:16:11.118 issued rwts: total=249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.118 job5: (groupid=0, jobs=1): err= 0: pid=2024660: Thu Apr 18 17:32:35 2024 00:16:11.118 read: IOPS=141, BW=141MiB/s (148MB/s)(1421MiB/10071msec) 00:16:11.118 slat (usec): min=43, max=2121.5k, avg=7047.14, stdev=68373.34 00:16:11.118 clat (msec): min=46, max=3306, avg=864.37, stdev=898.28 00:16:11.118 lat (msec): min=83, max=3335, avg=871.42, stdev=902.02 00:16:11.118 clat percentiles (msec): 00:16:11.118 | 1.00th=[ 140], 5.00th=[ 159], 10.00th=[ 167], 20.00th=[ 253], 00:16:11.118 | 30.00th=[ 351], 40.00th=[ 368], 50.00th=[ 518], 60.00th=[ 617], 00:16:11.118 | 70.00th=[ 743], 80.00th=[ 1036], 90.00th=[ 2735], 95.00th=[ 2970], 00:16:11.118 | 99.00th=[ 3138], 99.50th=[ 3171], 99.90th=[ 3205], 99.95th=[ 3306], 00:16:11.118 | 99.99th=[ 3306] 00:16:11.118 bw ( KiB/s): min=18432, max=595968, per=7.83%, avg=189157.07, stdev=151372.42, samples=14 00:16:11.118 iops : min= 18, max= 582, avg=184.71, stdev=147.83, samples=14 00:16:11.118 lat (msec) : 50=0.07%, 100=0.56%, 250=18.44%, 500=29.28%, 750=22.38% 00:16:11.118 lat (msec) : 1000=8.23%, 2000=3.17%, >=2000=17.87% 00:16:11.118 cpu : usr=0.13%, sys=1.83%, ctx=957, majf=0, minf=32769 00:16:11.118 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.6% 00:16:11.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.118 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.118 issued rwts: total=1421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.118 job5: (groupid=0, jobs=1): err= 0: pid=2024661: Thu Apr 18 17:32:35 2024 00:16:11.118 read: IOPS=201, BW=201MiB/s (211MB/s)(2146MiB/10667msec) 00:16:11.118 slat (usec): min=38, max=1757.2k, avg=4903.72, stdev=49597.18 00:16:11.118 clat (msec): min=84, max=2869, avg=528.23, stdev=531.46 00:16:11.118 lat (msec): min=84, max=2871, avg=533.13, stdev=534.96 00:16:11.118 clat percentiles (msec): 00:16:11.118 | 1.00th=[ 96], 5.00th=[ 116], 10.00th=[ 155], 20.00th=[ 222], 00:16:11.118 | 30.00th=[ 268], 40.00th=[ 338], 50.00th=[ 384], 60.00th=[ 447], 00:16:11.118 | 70.00th=[ 542], 80.00th=[ 609], 90.00th=[ 760], 95.00th=[ 1972], 00:16:11.118 | 99.00th=[ 2836], 99.50th=[ 2869], 99.90th=[ 2869], 99.95th=[ 2869], 00:16:11.118 | 99.99th=[ 2869] 00:16:11.118 bw ( KiB/s): min= 1855, max=618496, per=11.41%, avg=275627.67, stdev=192271.84, samples=15 00:16:11.118 iops : min= 1, max= 604, avg=269.00, stdev=187.98, samples=15 00:16:11.118 lat (msec) : 100=2.00%, 250=23.11%, 500=38.12%, 750=26.61%, 1000=2.52% 00:16:11.118 lat (msec) : 2000=2.66%, >=2000=4.99% 00:16:11.118 cpu : usr=0.15%, sys=2.77%, ctx=1411, majf=0, minf=32769 00:16:11.118 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:16:11.118 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.118 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.118 job5: (groupid=0, jobs=1): err= 0: pid=2024662: Thu Apr 18 17:32:35 2024 00:16:11.118 read: IOPS=26, BW=26.7MiB/s (28.0MB/s)(282MiB/10567msec) 00:16:11.118 slat (usec): min=54, max=2149.8k, avg=37023.66, stdev=241328.29 00:16:11.118 clat (msec): min=124, max=8387, avg=4513.25, stdev=1556.66 00:16:11.118 lat (msec): min=1784, max=8387, avg=4550.28, stdev=1551.62 00:16:11.118 clat percentiles (msec): 00:16:11.118 | 1.00th=[ 1787], 5.00th=[ 1955], 10.00th=[ 1972], 20.00th=[ 3675], 00:16:11.118 | 30.00th=[ 3742], 40.00th=[ 4212], 50.00th=[ 4279], 60.00th=[ 4463], 00:16:11.118 | 70.00th=[ 5604], 80.00th=[ 6141], 90.00th=[ 6409], 95.00th=[ 6477], 00:16:11.118 | 99.00th=[ 8356], 99.50th=[ 8356], 99.90th=[ 8356], 99.95th=[ 8356], 00:16:11.118 | 99.99th=[ 8356] 00:16:11.118 bw ( KiB/s): min=16384, max=124928, per=2.61%, avg=63078.40, stdev=44621.03, samples=5 00:16:11.118 iops : min= 16, max= 122, avg=61.60, stdev=43.58, samples=5 00:16:11.118 lat (msec) : 250=0.35%, 2000=10.99%, >=2000=88.65% 00:16:11.118 cpu : usr=0.01%, sys=0.85%, ctx=185, majf=0, minf=32769 00:16:11.118 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.7%, 32=11.3%, >=64=77.7% 00:16:11.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.118 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:16:11.118 issued rwts: total=282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.119 job5: (groupid=0, jobs=1): err= 0: pid=2024663: Thu Apr 18 17:32:35 2024 00:16:11.119 read: IOPS=161, BW=162MiB/s (170MB/s)(1715MiB/10604msec) 00:16:11.119 slat (usec): min=53, max=2027.8k, avg=6100.70, stdev=89511.50 00:16:11.119 clat (msec): min=95, max=6729, avg=644.10, stdev=1206.39 00:16:11.119 lat (msec): min=97, max=8515, avg=650.20, stdev=1218.65 00:16:11.119 clat percentiles (msec): 00:16:11.119 | 1.00th=[ 110], 5.00th=[ 124], 10.00th=[ 125], 20.00th=[ 125], 00:16:11.119 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 132], 60.00th=[ 133], 00:16:11.119 | 70.00th=[ 136], 80.00th=[ 334], 90.00th=[ 2433], 95.00th=[ 4111], 00:16:11.119 | 99.00th=[ 4597], 99.50th=[ 6678], 99.90th=[ 6745], 99.95th=[ 6745], 00:16:11.119 | 99.99th=[ 6745] 00:16:11.119 bw ( KiB/s): min= 1988, max=942080, per=13.46%, avg=325216.40, stdev=406960.30, samples=10 00:16:11.119 iops : min= 1, max= 920, avg=317.50, stdev=397.51, samples=10 00:16:11.119 lat (msec) : 100=0.29%, 250=78.02%, 500=3.21%, 750=0.87%, 2000=3.27% 00:16:11.119 lat (msec) : >=2000=14.34% 00:16:11.119 cpu : usr=0.04%, sys=1.88%, ctx=1265, majf=0, minf=32769 00:16:11.119 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:16:11.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.119 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:11.119 issued rwts: total=1715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.119 job5: (groupid=0, jobs=1): err= 0: pid=2024664: Thu Apr 18 17:32:35 2024 00:16:11.119 read: IOPS=23, BW=23.8MiB/s (24.9MB/s)(240MiB/10105msec) 00:16:11.119 slat (usec): min=64, max=2042.4k, avg=41669.54, stdev=241499.42 00:16:11.119 
clat (msec): min=101, max=8958, avg=4159.17, stdev=3544.02 00:16:11.119 lat (msec): min=106, max=8974, avg=4200.84, stdev=3550.96 00:16:11.119 clat percentiles (msec): 00:16:11.119 | 1.00th=[ 132], 5.00th=[ 330], 10.00th=[ 464], 20.00th=[ 810], 00:16:11.119 | 30.00th=[ 1267], 40.00th=[ 1485], 50.00th=[ 1653], 60.00th=[ 5403], 00:16:11.119 | 70.00th=[ 8490], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[ 8792], 00:16:11.119 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:16:11.119 | 99.99th=[ 8926] 00:16:11.119 bw ( KiB/s): min= 8192, max=104448, per=2.39%, avg=57856.00, stdev=41040.99, samples=4 00:16:11.119 iops : min= 8, max= 102, avg=56.50, stdev=40.08, samples=4 00:16:11.119 lat (msec) : 250=4.17%, 500=7.50%, 750=6.25%, 1000=4.58%, 2000=29.17% 00:16:11.119 lat (msec) : >=2000=48.33% 00:16:11.119 cpu : usr=0.00%, sys=1.09%, ctx=303, majf=0, minf=32769 00:16:11.119 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.3%, 16=6.7%, 32=13.3%, >=64=73.8% 00:16:11.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.119 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:16:11.119 issued rwts: total=240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.119 job5: (groupid=0, jobs=1): err= 0: pid=2024665: Thu Apr 18 17:32:35 2024 00:16:11.119 read: IOPS=62, BW=62.3MiB/s (65.3MB/s)(785MiB/12598msec) 00:16:11.119 slat (usec): min=55, max=2016.2k, avg=13309.39, stdev=137664.17 00:16:11.119 clat (msec): min=313, max=6726, avg=1458.98, stdev=2288.80 00:16:11.119 lat (msec): min=317, max=6729, avg=1472.29, stdev=2297.53 00:16:11.119 clat percentiles (msec): 00:16:11.119 | 1.00th=[ 321], 5.00th=[ 330], 10.00th=[ 330], 20.00th=[ 334], 00:16:11.119 | 30.00th=[ 338], 40.00th=[ 338], 50.00th=[ 342], 60.00th=[ 347], 00:16:11.119 | 70.00th=[ 355], 80.00th=[ 2467], 90.00th=[ 6544], 95.00th=[ 6611], 00:16:11.119 | 99.00th=[ 6678], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:16:11.119 | 99.99th=[ 6745] 00:16:11.119 bw ( KiB/s): min= 2048, max=385024, per=7.97%, avg=192512.00, stdev=180479.82, samples=7 00:16:11.119 iops : min= 2, max= 376, avg=188.00, stdev=176.25, samples=7 00:16:11.119 lat (msec) : 500=79.49%, >=2000=20.51% 00:16:11.119 cpu : usr=0.03%, sys=0.99%, ctx=1411, majf=0, minf=32769 00:16:11.119 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:16:11.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:11.119 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:11.119 issued rwts: total=785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:11.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:11.119 00:16:11.119 Run status group 0 (all jobs): 00:16:11.119 READ: bw=2360MiB/s (2474MB/s), 646KiB/s-231MiB/s (661kB/s-243MB/s), io=29.3GiB (31.5GB), run=10026-12734msec 00:16:11.119 00:16:11.119 Disk stats (read/write): 00:16:11.119 nvme0n1: ios=6829/0, merge=0/0, ticks=4566977/0, in_queue=4566977, util=98.51% 00:16:11.119 nvme1n1: ios=36819/0, merge=0/0, ticks=7428297/0, in_queue=7428297, util=98.78% 00:16:11.119 nvme2n1: ios=13025/0, merge=0/0, ticks=8562723/0, in_queue=8562723, util=99.01% 00:16:11.119 nvme3n1: ios=32850/0, merge=0/0, ticks=11313206/0, in_queue=11313206, util=98.83% 00:16:11.119 nvme4n1: ios=61140/0, merge=0/0, ticks=11926336/0, in_queue=11926336, util=99.11% 00:16:11.119 nvme5n1: ios=88053/0, merge=0/0, ticks=10573169/0, in_queue=10573169, util=99.28% 
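The fio summary above closes out the read workload that the srq_overwhelm test drives against the six NVMe-oF attached namespaces (nvme0n1 through nvme5n1): roughly 2.3 GiB/s of aggregate read bandwidth at queue depth 128 with ~99% device utilization. The job file itself is never echoed into the log, so the invocation below is only a rough, hypothetical stand-in; the libaio engine, 1 MiB block size, random-read pattern and 10-second runtime are guesses inferred from the per-job bandwidth/IOPS ratios and run times, not parameters taken from the test.

# Hypothetical approximation of the fio run summarized above; only the
# device list and iodepth=128 are visible in the log, the rest is assumed.
fio --ioengine=libaio --direct=1 --rw=randread --bs=1M --iodepth=128 \
    --time_based --runtime=10 \
    --name=job0 --filename=/dev/nvme0n1 \
    --name=job1 --filename=/dev/nvme1n1 \
    --name=job2 --filename=/dev/nvme2n1 \
    --name=job3 --filename=/dev/nvme3n1 \
    --name=job4 --filename=/dev/nvme4n1 \
    --name=job5 --filename=/dev/nvme5n1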
00:16:11.119 17:32:36 -- target/srq_overwhelm.sh@38 -- # sync 00:16:11.119 17:32:36 -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:16:11.119 17:32:36 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:11.119 17:32:36 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:16:12.053 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.053 17:32:38 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:16:12.053 17:32:38 -- common/autotest_common.sh@1205 -- # local i=0 00:16:12.053 17:32:38 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:12.053 17:32:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000000 00:16:12.053 17:32:38 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:12.053 17:32:38 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000000 00:16:12.053 17:32:38 -- common/autotest_common.sh@1217 -- # return 0 00:16:12.053 17:32:38 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:12.053 17:32:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:12.053 17:32:38 -- common/autotest_common.sh@10 -- # set +x 00:16:12.053 17:32:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:12.053 17:32:38 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:12.054 17:32:38 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:14.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.590 17:32:40 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:16:14.590 17:32:40 -- common/autotest_common.sh@1205 -- # local i=0 00:16:14.590 17:32:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:14.590 17:32:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000001 00:16:14.590 17:32:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:14.590 17:32:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000001 00:16:14.590 17:32:40 -- common/autotest_common.sh@1217 -- # return 0 00:16:14.590 17:32:40 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.590 17:32:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.590 17:32:40 -- common/autotest_common.sh@10 -- # set +x 00:16:14.590 17:32:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.590 17:32:40 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:14.591 17:32:40 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:16.490 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:16.490 17:32:42 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:16:16.490 17:32:42 -- common/autotest_common.sh@1205 -- # local i=0 00:16:16.490 17:32:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:16.490 17:32:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000002 00:16:16.490 17:32:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:16.490 17:32:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000002 00:16:16.490 17:32:42 -- common/autotest_common.sh@1217 -- # return 0 00:16:16.490 17:32:42 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:16.490 17:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.490 17:32:42 -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.490 17:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.490 17:32:42 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:16.490 17:32:42 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:19.024 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:19.024 17:32:45 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:16:19.024 17:32:45 -- common/autotest_common.sh@1205 -- # local i=0 00:16:19.024 17:32:45 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:19.024 17:32:45 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000003 00:16:19.024 17:32:45 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:19.024 17:32:45 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000003 00:16:19.024 17:32:45 -- common/autotest_common.sh@1217 -- # return 0 00:16:19.024 17:32:45 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:19.024 17:32:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.024 17:32:45 -- common/autotest_common.sh@10 -- # set +x 00:16:19.024 17:32:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.024 17:32:45 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:19.024 17:32:45 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:21.557 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:21.557 17:32:47 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:16:21.557 17:32:47 -- common/autotest_common.sh@1205 -- # local i=0 00:16:21.557 17:32:47 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:21.557 17:32:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000004 00:16:21.557 17:32:47 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:21.557 17:32:47 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000004 00:16:21.557 17:32:47 -- common/autotest_common.sh@1217 -- # return 0 00:16:21.557 17:32:47 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:21.557 17:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.557 17:32:47 -- common/autotest_common.sh@10 -- # set +x 00:16:21.557 17:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.557 17:32:47 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:21.557 17:32:47 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:23.520 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:23.520 17:32:49 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:16:23.520 17:32:49 -- common/autotest_common.sh@1205 -- # local i=0 00:16:23.520 17:32:49 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:23.520 17:32:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000005 00:16:23.520 17:32:49 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:23.520 17:32:49 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000005 00:16:23.520 17:32:49 -- common/autotest_common.sh@1217 -- # return 0 00:16:23.520 17:32:49 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:23.520 17:32:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.520 17:32:49 -- common/autotest_common.sh@10 -- # set 
+x 00:16:23.520 17:32:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.520 17:32:49 -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:23.520 17:32:49 -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:16:23.520 17:32:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:23.520 17:32:49 -- nvmf/common.sh@117 -- # sync 00:16:23.520 17:32:49 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:23.520 17:32:49 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:23.520 17:32:49 -- nvmf/common.sh@120 -- # set +e 00:16:23.520 17:32:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:23.520 17:32:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:23.520 rmmod nvme_rdma 00:16:23.520 rmmod nvme_fabrics 00:16:23.520 17:32:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:23.520 17:32:49 -- nvmf/common.sh@124 -- # set -e 00:16:23.520 17:32:49 -- nvmf/common.sh@125 -- # return 0 00:16:23.520 17:32:49 -- nvmf/common.sh@478 -- # '[' -n 2021699 ']' 00:16:23.520 17:32:49 -- nvmf/common.sh@479 -- # killprocess 2021699 00:16:23.520 17:32:49 -- common/autotest_common.sh@936 -- # '[' -z 2021699 ']' 00:16:23.520 17:32:49 -- common/autotest_common.sh@940 -- # kill -0 2021699 00:16:23.520 17:32:49 -- common/autotest_common.sh@941 -- # uname 00:16:23.520 17:32:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:23.520 17:32:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2021699 00:16:23.520 17:32:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:23.520 17:32:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:23.520 17:32:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2021699' 00:16:23.520 killing process with pid 2021699 00:16:23.520 17:32:49 -- common/autotest_common.sh@955 -- # kill 2021699 00:16:23.520 17:32:49 -- common/autotest_common.sh@960 -- # wait 2021699 00:16:23.779 17:32:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:23.779 17:32:50 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:23.779 00:16:23.779 real 0m51.683s 00:16:23.779 user 3m16.793s 00:16:23.779 sys 0m10.917s 00:16:23.779 17:32:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:23.779 17:32:50 -- common/autotest_common.sh@10 -- # set +x 00:16:23.779 ************************************ 00:16:23.779 END TEST nvmf_srq_overwhelm 00:16:23.779 ************************************ 00:16:24.038 17:32:50 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:16:24.038 17:32:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:24.038 17:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.038 17:32:50 -- common/autotest_common.sh@10 -- # set +x 00:16:24.038 ************************************ 00:16:24.038 START TEST nvmf_shutdown 00:16:24.038 ************************************ 00:16:24.038 17:32:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:16:24.038 * Looking for test storage... 
00:16:24.038 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:24.038 17:32:50 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.038 17:32:50 -- nvmf/common.sh@7 -- # uname -s 00:16:24.038 17:32:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.038 17:32:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.038 17:32:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.038 17:32:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.038 17:32:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.038 17:32:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.038 17:32:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.038 17:32:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.038 17:32:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.038 17:32:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.038 17:32:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.038 17:32:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.038 17:32:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.038 17:32:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.038 17:32:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.038 17:32:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.038 17:32:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:24.038 17:32:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.038 17:32:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.038 17:32:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.038 17:32:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.038 17:32:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.038 17:32:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.038 17:32:50 -- paths/export.sh@5 -- # export PATH 00:16:24.038 17:32:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.038 17:32:50 -- nvmf/common.sh@47 -- # : 0 00:16:24.038 17:32:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.038 17:32:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.038 17:32:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.038 17:32:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.038 17:32:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.038 17:32:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.038 17:32:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.038 17:32:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.038 17:32:50 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:24.038 17:32:50 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.038 17:32:50 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:24.038 17:32:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:24.038 17:32:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.038 17:32:50 -- common/autotest_common.sh@10 -- # set +x 00:16:24.296 ************************************ 00:16:24.296 START TEST nvmf_shutdown_tc1 00:16:24.296 ************************************ 00:16:24.296 17:32:50 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:16:24.296 17:32:50 -- target/shutdown.sh@74 -- # starttarget 00:16:24.296 17:32:50 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:24.296 17:32:50 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:16:24.296 17:32:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.296 17:32:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:24.296 17:32:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:24.296 17:32:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:24.296 17:32:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.296 17:32:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.296 17:32:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.296 17:32:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:24.296 17:32:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:24.296 17:32:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:24.296 17:32:50 -- common/autotest_common.sh@10 -- # set +x 00:16:26.198 17:32:52 -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:16:26.198 17:32:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:26.198 17:32:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:26.198 17:32:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:26.198 17:32:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:26.198 17:32:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:26.198 17:32:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:26.198 17:32:52 -- nvmf/common.sh@295 -- # net_devs=() 00:16:26.198 17:32:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:26.198 17:32:52 -- nvmf/common.sh@296 -- # e810=() 00:16:26.198 17:32:52 -- nvmf/common.sh@296 -- # local -ga e810 00:16:26.198 17:32:52 -- nvmf/common.sh@297 -- # x722=() 00:16:26.198 17:32:52 -- nvmf/common.sh@297 -- # local -ga x722 00:16:26.198 17:32:52 -- nvmf/common.sh@298 -- # mlx=() 00:16:26.198 17:32:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:26.198 17:32:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:26.198 17:32:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:26.198 17:32:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:26.198 17:32:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:26.198 17:32:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:26.198 17:32:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:26.198 17:32:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:26.198 17:32:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:26.198 17:32:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:26.198 17:32:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:26.198 17:32:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:26.198 17:32:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:26.198 17:32:52 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:26.198 17:32:52 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:26.198 17:32:52 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:26.198 17:32:52 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:26.198 17:32:52 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:26.198 17:32:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:26.198 17:32:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:26.198 17:32:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:16:26.199 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:16:26.199 17:32:52 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:26.199 17:32:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:26.199 17:32:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:16:26.199 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:16:26.199 17:32:52 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:26.199 17:32:52 -- nvmf/common.sh@366 -- # (( 0 
> 0 )) 00:16:26.199 17:32:52 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:26.199 17:32:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.199 17:32:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:26.199 17:32:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.199 17:32:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:16:26.199 Found net devices under 0000:09:00.0: mlx_0_0 00:16:26.199 17:32:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.199 17:32:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:26.199 17:32:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.199 17:32:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:26.199 17:32:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.199 17:32:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:16:26.199 Found net devices under 0000:09:00.1: mlx_0_1 00:16:26.199 17:32:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.199 17:32:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:26.199 17:32:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:26.199 17:32:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@409 -- # rdma_device_init 00:16:26.199 17:32:52 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:16:26.199 17:32:52 -- nvmf/common.sh@58 -- # uname 00:16:26.199 17:32:52 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:26.199 17:32:52 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:26.199 17:32:52 -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:26.199 17:32:52 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:26.199 17:32:52 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:26.199 17:32:52 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:26.199 17:32:52 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:26.199 17:32:52 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:26.199 17:32:52 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:16:26.199 17:32:52 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:26.199 17:32:52 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:26.199 17:32:52 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:26.199 17:32:52 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:26.199 17:32:52 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:26.199 17:32:52 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:26.199 17:32:52 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:26.199 17:32:52 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:26.199 17:32:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:26.199 17:32:52 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:26.199 17:32:52 -- nvmf/common.sh@105 -- # continue 2 00:16:26.199 17:32:52 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:26.199 17:32:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:26.199 17:32:52 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:16:26.199 17:32:52 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:26.199 17:32:52 -- nvmf/common.sh@105 -- # continue 2 00:16:26.199 17:32:52 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:26.199 17:32:52 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:26.199 17:32:52 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:26.199 17:32:52 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:26.199 17:32:52 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:26.199 17:32:52 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:26.199 17:32:52 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:26.199 17:32:52 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:26.199 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:26.199 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:16:26.199 altname enp9s0f0np0 00:16:26.199 inet 192.168.100.8/24 scope global mlx_0_0 00:16:26.199 valid_lft forever preferred_lft forever 00:16:26.199 17:32:52 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:26.199 17:32:52 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:26.199 17:32:52 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:26.199 17:32:52 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:26.199 17:32:52 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:26.199 17:32:52 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:26.199 17:32:52 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:26.199 17:32:52 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:26.199 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:26.199 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:16:26.199 altname enp9s0f1np1 00:16:26.199 inet 192.168.100.9/24 scope global mlx_0_1 00:16:26.199 valid_lft forever preferred_lft forever 00:16:26.199 17:32:52 -- nvmf/common.sh@411 -- # return 0 00:16:26.199 17:32:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:26.199 17:32:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:26.199 17:32:52 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:16:26.199 17:32:52 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:26.199 17:32:52 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:26.199 17:32:52 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:26.199 17:32:52 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:26.199 17:32:52 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:26.199 17:32:52 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:26.199 17:32:52 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:26.199 17:32:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:26.199 17:32:52 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:26.199 17:32:52 -- nvmf/common.sh@105 -- # continue 2 00:16:26.199 17:32:52 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:26.199 17:32:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:26.199 17:32:52 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:26.199 17:32:52 -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:26.199 17:32:52 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:26.199 17:32:52 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:26.199 17:32:52 -- nvmf/common.sh@105 -- # continue 2 00:16:26.199 17:32:52 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:26.199 17:32:52 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:26.199 17:32:52 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:26.199 17:32:52 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:26.199 17:32:52 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:26.199 17:32:52 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:26.199 17:32:52 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:26.199 17:32:52 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:26.199 17:32:52 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:26.199 17:32:52 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:26.199 17:32:52 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:26.199 17:32:52 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:26.199 17:32:52 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:16:26.199 192.168.100.9' 00:16:26.199 17:32:52 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:26.199 192.168.100.9' 00:16:26.199 17:32:52 -- nvmf/common.sh@446 -- # head -n 1 00:16:26.199 17:32:52 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:26.199 17:32:52 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:16:26.199 192.168.100.9' 00:16:26.199 17:32:52 -- nvmf/common.sh@447 -- # tail -n +2 00:16:26.199 17:32:52 -- nvmf/common.sh@447 -- # head -n 1 00:16:26.199 17:32:52 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:26.199 17:32:52 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:16:26.199 17:32:52 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:26.199 17:32:52 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:16:26.199 17:32:52 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:16:26.199 17:32:52 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:16:26.199 17:32:52 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:26.199 17:32:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:26.199 17:32:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:26.199 17:32:52 -- common/autotest_common.sh@10 -- # set +x 00:16:26.199 17:32:52 -- nvmf/common.sh@470 -- # nvmfpid=2029722 00:16:26.199 17:32:52 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:26.199 17:32:52 -- nvmf/common.sh@471 -- # waitforlisten 2029722 00:16:26.199 17:32:52 -- common/autotest_common.sh@817 -- # '[' -z 2029722 ']' 00:16:26.199 17:32:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.199 17:32:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:26.199 17:32:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.199 17:32:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:26.199 17:32:52 -- common/autotest_common.sh@10 -- # set +x 00:16:26.457 [2024-04-18 17:32:52.760647] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:16:26.457 [2024-04-18 17:32:52.760738] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.457 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.457 [2024-04-18 17:32:52.819302] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.457 [2024-04-18 17:32:52.924628] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.457 [2024-04-18 17:32:52.924682] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.457 [2024-04-18 17:32:52.924706] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.457 [2024-04-18 17:32:52.924717] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.457 [2024-04-18 17:32:52.924728] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.457 [2024-04-18 17:32:52.926401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.457 [2024-04-18 17:32:52.926476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.457 [2024-04-18 17:32:52.926547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:26.457 [2024-04-18 17:32:52.926551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.716 17:32:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:26.716 17:32:53 -- common/autotest_common.sh@850 -- # return 0 00:16:26.716 17:32:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:26.716 17:32:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:26.716 17:32:53 -- common/autotest_common.sh@10 -- # set +x 00:16:26.716 17:32:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.716 17:32:53 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:26.716 17:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.716 17:32:53 -- common/autotest_common.sh@10 -- # set +x 00:16:26.716 [2024-04-18 17:32:53.091586] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x939e40/0x93e330) succeed. 00:16:26.716 [2024-04-18 17:32:53.102363] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x93b430/0x97f9c0) succeed. 
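By this point nvmftestinit has found the two mlx5 ports (mlx_0_0/mlx_0_1), assigned them 192.168.100.8 and 192.168.100.9, started nvmf_tgt on core mask 0x1E, and the test has opened the RDMA transport; the lines that follow build up ten Malloc-backed subsystems via the rpcs.txt batch assembled by the repeated `cat` calls. For orientation, a hand-written rpc.py equivalent for a single subsystem is sketched below. The transport flags, the 64 MiB / 512 B Malloc geometry and the listener address come from values visible in this log; the one-call-at-a-time style and the serial string are illustrative assumptions rather than a transcript of what rpcs.txt contains, and the NQN simply matches what the bdevperf config later connects to.

# Illustrative single-subsystem bring-up, not the script's actual rpcs.txt batch.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create -b Malloc1 64 512   # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420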
00:16:26.974 17:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.974 17:32:53 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:26.974 17:32:53 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:26.974 17:32:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:26.974 17:32:53 -- common/autotest_common.sh@10 -- # set +x 00:16:26.974 17:32:53 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:26.974 17:32:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:26.974 17:32:53 -- target/shutdown.sh@28 -- # cat 00:16:26.974 17:32:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:26.974 17:32:53 -- target/shutdown.sh@28 -- # cat 00:16:26.974 17:32:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:26.974 17:32:53 -- target/shutdown.sh@28 -- # cat 00:16:26.974 17:32:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:26.974 17:32:53 -- target/shutdown.sh@28 -- # cat 00:16:26.974 17:32:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:26.974 17:32:53 -- target/shutdown.sh@28 -- # cat 00:16:26.974 17:32:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:26.974 17:32:53 -- target/shutdown.sh@28 -- # cat 00:16:26.974 17:32:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:26.974 17:32:53 -- target/shutdown.sh@28 -- # cat 00:16:26.974 17:32:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:26.974 17:32:53 -- target/shutdown.sh@28 -- # cat 00:16:26.974 17:32:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:26.974 17:32:53 -- target/shutdown.sh@28 -- # cat 00:16:26.974 17:32:53 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:26.974 17:32:53 -- target/shutdown.sh@28 -- # cat 00:16:26.974 17:32:53 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:26.974 17:32:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.974 17:32:53 -- common/autotest_common.sh@10 -- # set +x 00:16:26.974 Malloc1 00:16:26.974 [2024-04-18 17:32:53.342953] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:26.974 Malloc2 00:16:26.974 Malloc3 00:16:26.974 Malloc4 00:16:27.231 Malloc5 00:16:27.231 Malloc6 00:16:27.231 Malloc7 00:16:27.231 Malloc8 00:16:27.231 Malloc9 00:16:27.489 Malloc10 00:16:27.489 17:32:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.489 17:32:53 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:27.489 17:32:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:27.489 17:32:53 -- common/autotest_common.sh@10 -- # set +x 00:16:27.489 17:32:53 -- target/shutdown.sh@78 -- # perfpid=2029860 00:16:27.489 17:32:53 -- target/shutdown.sh@79 -- # waitforlisten 2029860 /var/tmp/bdevperf.sock 00:16:27.489 17:32:53 -- common/autotest_common.sh@817 -- # '[' -z 2029860 ']' 00:16:27.489 17:32:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:27.489 17:32:53 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:27.489 17:32:53 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:27.489 17:32:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:27.489 17:32:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:27.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:27.489 17:32:53 -- nvmf/common.sh@521 -- # config=() 00:16:27.489 17:32:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:27.489 17:32:53 -- nvmf/common.sh@521 -- # local subsystem config 00:16:27.489 17:32:53 -- common/autotest_common.sh@10 -- # set +x 00:16:27.489 17:32:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:27.489 17:32:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:27.489 { 00:16:27.489 "params": { 00:16:27.489 "name": "Nvme$subsystem", 00:16:27.489 "trtype": "$TEST_TRANSPORT", 00:16:27.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.489 "adrfam": "ipv4", 00:16:27.489 "trsvcid": "$NVMF_PORT", 00:16:27.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.489 "hdgst": ${hdgst:-false}, 00:16:27.489 "ddgst": ${ddgst:-false} 00:16:27.489 }, 00:16:27.489 "method": "bdev_nvme_attach_controller" 00:16:27.489 } 00:16:27.489 EOF 00:16:27.489 )") 00:16:27.489 17:32:53 -- nvmf/common.sh@543 -- # cat 00:16:27.489 17:32:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:27.489 17:32:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:27.489 { 00:16:27.489 "params": { 00:16:27.489 "name": "Nvme$subsystem", 00:16:27.489 "trtype": "$TEST_TRANSPORT", 00:16:27.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.489 "adrfam": "ipv4", 00:16:27.489 "trsvcid": "$NVMF_PORT", 00:16:27.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.489 "hdgst": ${hdgst:-false}, 00:16:27.490 "ddgst": ${ddgst:-false} 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 } 00:16:27.490 EOF 00:16:27.490 )") 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # cat 00:16:27.490 17:32:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:27.490 { 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme$subsystem", 00:16:27.490 "trtype": "$TEST_TRANSPORT", 00:16:27.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "$NVMF_PORT", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.490 "hdgst": ${hdgst:-false}, 00:16:27.490 "ddgst": ${ddgst:-false} 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 } 00:16:27.490 EOF 00:16:27.490 )") 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # cat 00:16:27.490 17:32:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:27.490 { 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme$subsystem", 00:16:27.490 "trtype": "$TEST_TRANSPORT", 00:16:27.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "$NVMF_PORT", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.490 "hdgst": ${hdgst:-false}, 00:16:27.490 "ddgst": ${ddgst:-false} 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 } 00:16:27.490 EOF 00:16:27.490 )") 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # cat 00:16:27.490 17:32:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:27.490 { 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme$subsystem", 00:16:27.490 "trtype": "$TEST_TRANSPORT", 00:16:27.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "$NVMF_PORT", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.490 "hdgst": ${hdgst:-false}, 00:16:27.490 "ddgst": ${ddgst:-false} 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 } 00:16:27.490 EOF 00:16:27.490 )") 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # cat 00:16:27.490 17:32:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:27.490 { 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme$subsystem", 00:16:27.490 "trtype": "$TEST_TRANSPORT", 00:16:27.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "$NVMF_PORT", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.490 "hdgst": ${hdgst:-false}, 00:16:27.490 "ddgst": ${ddgst:-false} 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 } 00:16:27.490 EOF 00:16:27.490 )") 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # cat 00:16:27.490 17:32:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:27.490 { 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme$subsystem", 00:16:27.490 "trtype": "$TEST_TRANSPORT", 00:16:27.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "$NVMF_PORT", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.490 "hdgst": ${hdgst:-false}, 00:16:27.490 "ddgst": ${ddgst:-false} 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 } 00:16:27.490 EOF 00:16:27.490 )") 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # cat 00:16:27.490 17:32:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:27.490 { 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme$subsystem", 00:16:27.490 "trtype": "$TEST_TRANSPORT", 00:16:27.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "$NVMF_PORT", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.490 "hdgst": ${hdgst:-false}, 00:16:27.490 "ddgst": ${ddgst:-false} 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 } 00:16:27.490 EOF 00:16:27.490 )") 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # cat 00:16:27.490 17:32:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:27.490 { 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme$subsystem", 00:16:27.490 "trtype": "$TEST_TRANSPORT", 00:16:27.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "$NVMF_PORT", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.490 "hdgst": ${hdgst:-false}, 00:16:27.490 "ddgst": ${ddgst:-false} 
00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 } 00:16:27.490 EOF 00:16:27.490 )") 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # cat 00:16:27.490 17:32:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:27.490 { 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme$subsystem", 00:16:27.490 "trtype": "$TEST_TRANSPORT", 00:16:27.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "$NVMF_PORT", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.490 "hdgst": ${hdgst:-false}, 00:16:27.490 "ddgst": ${ddgst:-false} 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 } 00:16:27.490 EOF 00:16:27.490 )") 00:16:27.490 17:32:53 -- nvmf/common.sh@543 -- # cat 00:16:27.490 17:32:53 -- nvmf/common.sh@545 -- # jq . 00:16:27.490 17:32:53 -- nvmf/common.sh@546 -- # IFS=, 00:16:27.490 17:32:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme1", 00:16:27.490 "trtype": "rdma", 00:16:27.490 "traddr": "192.168.100.8", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "4420", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.490 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.490 "hdgst": false, 00:16:27.490 "ddgst": false 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 },{ 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme2", 00:16:27.490 "trtype": "rdma", 00:16:27.490 "traddr": "192.168.100.8", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "4420", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:27.490 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:27.490 "hdgst": false, 00:16:27.490 "ddgst": false 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 },{ 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme3", 00:16:27.490 "trtype": "rdma", 00:16:27.490 "traddr": "192.168.100.8", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "4420", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:27.490 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:27.490 "hdgst": false, 00:16:27.490 "ddgst": false 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 },{ 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme4", 00:16:27.490 "trtype": "rdma", 00:16:27.490 "traddr": "192.168.100.8", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "4420", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:27.490 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:27.490 "hdgst": false, 00:16:27.490 "ddgst": false 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 },{ 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme5", 00:16:27.490 "trtype": "rdma", 00:16:27.490 "traddr": "192.168.100.8", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "4420", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:27.490 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:27.490 "hdgst": false, 00:16:27.490 "ddgst": false 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 },{ 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme6", 00:16:27.490 "trtype": "rdma", 00:16:27.490 "traddr": "192.168.100.8", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "4420", 00:16:27.490 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:27.490 
"hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:27.490 "hdgst": false, 00:16:27.490 "ddgst": false 00:16:27.490 }, 00:16:27.490 "method": "bdev_nvme_attach_controller" 00:16:27.490 },{ 00:16:27.490 "params": { 00:16:27.490 "name": "Nvme7", 00:16:27.490 "trtype": "rdma", 00:16:27.490 "traddr": "192.168.100.8", 00:16:27.490 "adrfam": "ipv4", 00:16:27.490 "trsvcid": "4420", 00:16:27.491 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:27.491 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:27.491 "hdgst": false, 00:16:27.491 "ddgst": false 00:16:27.491 }, 00:16:27.491 "method": "bdev_nvme_attach_controller" 00:16:27.491 },{ 00:16:27.491 "params": { 00:16:27.491 "name": "Nvme8", 00:16:27.491 "trtype": "rdma", 00:16:27.491 "traddr": "192.168.100.8", 00:16:27.491 "adrfam": "ipv4", 00:16:27.491 "trsvcid": "4420", 00:16:27.491 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:27.491 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:27.491 "hdgst": false, 00:16:27.491 "ddgst": false 00:16:27.491 }, 00:16:27.491 "method": "bdev_nvme_attach_controller" 00:16:27.491 },{ 00:16:27.491 "params": { 00:16:27.491 "name": "Nvme9", 00:16:27.491 "trtype": "rdma", 00:16:27.491 "traddr": "192.168.100.8", 00:16:27.491 "adrfam": "ipv4", 00:16:27.491 "trsvcid": "4420", 00:16:27.491 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:27.491 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:27.491 "hdgst": false, 00:16:27.491 "ddgst": false 00:16:27.491 }, 00:16:27.491 "method": "bdev_nvme_attach_controller" 00:16:27.491 },{ 00:16:27.491 "params": { 00:16:27.491 "name": "Nvme10", 00:16:27.491 "trtype": "rdma", 00:16:27.491 "traddr": "192.168.100.8", 00:16:27.491 "adrfam": "ipv4", 00:16:27.491 "trsvcid": "4420", 00:16:27.491 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:27.491 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:27.491 "hdgst": false, 00:16:27.491 "ddgst": false 00:16:27.491 }, 00:16:27.491 "method": "bdev_nvme_attach_controller" 00:16:27.491 }' 00:16:27.491 [2024-04-18 17:32:53.863529] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:16:27.491 [2024-04-18 17:32:53.863610] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:27.491 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.491 [2024-04-18 17:32:53.927434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.748 [2024-04-18 17:32:54.035484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.687 17:32:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:28.687 17:32:54 -- common/autotest_common.sh@850 -- # return 0 00:16:28.687 17:32:54 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:28.687 17:32:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:28.687 17:32:54 -- common/autotest_common.sh@10 -- # set +x 00:16:28.687 17:32:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:28.687 17:32:54 -- target/shutdown.sh@83 -- # kill -9 2029860 00:16:28.687 17:32:54 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:16:28.687 17:32:54 -- target/shutdown.sh@87 -- # sleep 1 00:16:29.623 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2029860 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:16:29.623 17:32:55 -- target/shutdown.sh@88 -- # kill -0 2029722 00:16:29.623 17:32:55 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:29.623 17:32:55 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:29.623 17:32:55 -- nvmf/common.sh@521 -- # config=() 00:16:29.623 17:32:55 -- nvmf/common.sh@521 -- # local subsystem config 00:16:29.623 17:32:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.623 17:32:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:29.623 { 00:16:29.623 "params": { 00:16:29.623 "name": "Nvme$subsystem", 00:16:29.623 "trtype": "$TEST_TRANSPORT", 00:16:29.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.623 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "$NVMF_PORT", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.624 "hdgst": ${hdgst:-false}, 00:16:29.624 "ddgst": ${ddgst:-false} 00:16:29.624 }, 00:16:29.624 "method": "bdev_nvme_attach_controller" 00:16:29.624 } 00:16:29.624 EOF 00:16:29.624 )") 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # cat 00:16:29.624 17:32:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:29.624 { 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme$subsystem", 00:16:29.624 "trtype": "$TEST_TRANSPORT", 00:16:29.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "$NVMF_PORT", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.624 "hdgst": ${hdgst:-false}, 00:16:29.624 "ddgst": ${ddgst:-false} 00:16:29.624 }, 00:16:29.624 "method": "bdev_nvme_attach_controller" 00:16:29.624 } 00:16:29.624 EOF 00:16:29.624 )") 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # cat 00:16:29.624 17:32:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # 
config+=("$(cat <<-EOF 00:16:29.624 { 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme$subsystem", 00:16:29.624 "trtype": "$TEST_TRANSPORT", 00:16:29.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "$NVMF_PORT", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.624 "hdgst": ${hdgst:-false}, 00:16:29.624 "ddgst": ${ddgst:-false} 00:16:29.624 }, 00:16:29.624 "method": "bdev_nvme_attach_controller" 00:16:29.624 } 00:16:29.624 EOF 00:16:29.624 )") 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # cat 00:16:29.624 17:32:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:29.624 { 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme$subsystem", 00:16:29.624 "trtype": "$TEST_TRANSPORT", 00:16:29.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "$NVMF_PORT", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.624 "hdgst": ${hdgst:-false}, 00:16:29.624 "ddgst": ${ddgst:-false} 00:16:29.624 }, 00:16:29.624 "method": "bdev_nvme_attach_controller" 00:16:29.624 } 00:16:29.624 EOF 00:16:29.624 )") 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # cat 00:16:29.624 17:32:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:29.624 { 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme$subsystem", 00:16:29.624 "trtype": "$TEST_TRANSPORT", 00:16:29.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "$NVMF_PORT", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.624 "hdgst": ${hdgst:-false}, 00:16:29.624 "ddgst": ${ddgst:-false} 00:16:29.624 }, 00:16:29.624 "method": "bdev_nvme_attach_controller" 00:16:29.624 } 00:16:29.624 EOF 00:16:29.624 )") 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # cat 00:16:29.624 17:32:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:29.624 { 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme$subsystem", 00:16:29.624 "trtype": "$TEST_TRANSPORT", 00:16:29.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "$NVMF_PORT", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.624 "hdgst": ${hdgst:-false}, 00:16:29.624 "ddgst": ${ddgst:-false} 00:16:29.624 }, 00:16:29.624 "method": "bdev_nvme_attach_controller" 00:16:29.624 } 00:16:29.624 EOF 00:16:29.624 )") 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # cat 00:16:29.624 17:32:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:29.624 { 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme$subsystem", 00:16:29.624 "trtype": "$TEST_TRANSPORT", 00:16:29.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "$NVMF_PORT", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.624 "hdgst": ${hdgst:-false}, 00:16:29.624 "ddgst": ${ddgst:-false} 00:16:29.624 }, 00:16:29.624 "method": 
"bdev_nvme_attach_controller" 00:16:29.624 } 00:16:29.624 EOF 00:16:29.624 )") 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # cat 00:16:29.624 17:32:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:29.624 { 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme$subsystem", 00:16:29.624 "trtype": "$TEST_TRANSPORT", 00:16:29.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "$NVMF_PORT", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.624 "hdgst": ${hdgst:-false}, 00:16:29.624 "ddgst": ${ddgst:-false} 00:16:29.624 }, 00:16:29.624 "method": "bdev_nvme_attach_controller" 00:16:29.624 } 00:16:29.624 EOF 00:16:29.624 )") 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # cat 00:16:29.624 17:32:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:29.624 { 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme$subsystem", 00:16:29.624 "trtype": "$TEST_TRANSPORT", 00:16:29.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "$NVMF_PORT", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.624 "hdgst": ${hdgst:-false}, 00:16:29.624 "ddgst": ${ddgst:-false} 00:16:29.624 }, 00:16:29.624 "method": "bdev_nvme_attach_controller" 00:16:29.624 } 00:16:29.624 EOF 00:16:29.624 )") 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # cat 00:16:29.624 17:32:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:29.624 { 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme$subsystem", 00:16:29.624 "trtype": "$TEST_TRANSPORT", 00:16:29.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "$NVMF_PORT", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.624 "hdgst": ${hdgst:-false}, 00:16:29.624 "ddgst": ${ddgst:-false} 00:16:29.624 }, 00:16:29.624 "method": "bdev_nvme_attach_controller" 00:16:29.624 } 00:16:29.624 EOF 00:16:29.624 )") 00:16:29.624 17:32:55 -- nvmf/common.sh@543 -- # cat 00:16:29.624 17:32:55 -- nvmf/common.sh@545 -- # jq . 
00:16:29.624 17:32:55 -- nvmf/common.sh@546 -- # IFS=, 00:16:29.624 17:32:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme1", 00:16:29.624 "trtype": "rdma", 00:16:29.624 "traddr": "192.168.100.8", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "4420", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.624 "hdgst": false, 00:16:29.624 "ddgst": false 00:16:29.624 }, 00:16:29.624 "method": "bdev_nvme_attach_controller" 00:16:29.624 },{ 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme2", 00:16:29.624 "trtype": "rdma", 00:16:29.624 "traddr": "192.168.100.8", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "4420", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:29.624 "hdgst": false, 00:16:29.624 "ddgst": false 00:16:29.624 }, 00:16:29.624 "method": "bdev_nvme_attach_controller" 00:16:29.624 },{ 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme3", 00:16:29.624 "trtype": "rdma", 00:16:29.624 "traddr": "192.168.100.8", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "4420", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:29.624 "hdgst": false, 00:16:29.624 "ddgst": false 00:16:29.624 }, 00:16:29.624 "method": "bdev_nvme_attach_controller" 00:16:29.624 },{ 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme4", 00:16:29.624 "trtype": "rdma", 00:16:29.624 "traddr": "192.168.100.8", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "4420", 00:16:29.624 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:29.624 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:29.624 "hdgst": false, 00:16:29.624 "ddgst": false 00:16:29.624 }, 00:16:29.624 "method": "bdev_nvme_attach_controller" 00:16:29.624 },{ 00:16:29.624 "params": { 00:16:29.624 "name": "Nvme5", 00:16:29.624 "trtype": "rdma", 00:16:29.624 "traddr": "192.168.100.8", 00:16:29.624 "adrfam": "ipv4", 00:16:29.624 "trsvcid": "4420", 00:16:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:29.625 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:29.625 "hdgst": false, 00:16:29.625 "ddgst": false 00:16:29.625 }, 00:16:29.625 "method": "bdev_nvme_attach_controller" 00:16:29.625 },{ 00:16:29.625 "params": { 00:16:29.625 "name": "Nvme6", 00:16:29.625 "trtype": "rdma", 00:16:29.625 "traddr": "192.168.100.8", 00:16:29.625 "adrfam": "ipv4", 00:16:29.625 "trsvcid": "4420", 00:16:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:29.625 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:29.625 "hdgst": false, 00:16:29.625 "ddgst": false 00:16:29.625 }, 00:16:29.625 "method": "bdev_nvme_attach_controller" 00:16:29.625 },{ 00:16:29.625 "params": { 00:16:29.625 "name": "Nvme7", 00:16:29.625 "trtype": "rdma", 00:16:29.625 "traddr": "192.168.100.8", 00:16:29.625 "adrfam": "ipv4", 00:16:29.625 "trsvcid": "4420", 00:16:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:29.625 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:29.625 "hdgst": false, 00:16:29.625 "ddgst": false 00:16:29.625 }, 00:16:29.625 "method": "bdev_nvme_attach_controller" 00:16:29.625 },{ 00:16:29.625 "params": { 00:16:29.625 "name": "Nvme8", 00:16:29.625 "trtype": "rdma", 00:16:29.625 "traddr": "192.168.100.8", 00:16:29.625 "adrfam": "ipv4", 00:16:29.625 "trsvcid": "4420", 00:16:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:29.625 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:29.625 "hdgst": false, 00:16:29.625 "ddgst": false 00:16:29.625 }, 
00:16:29.625 "method": "bdev_nvme_attach_controller" 00:16:29.625 },{ 00:16:29.625 "params": { 00:16:29.625 "name": "Nvme9", 00:16:29.625 "trtype": "rdma", 00:16:29.625 "traddr": "192.168.100.8", 00:16:29.625 "adrfam": "ipv4", 00:16:29.625 "trsvcid": "4420", 00:16:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:29.625 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:29.625 "hdgst": false, 00:16:29.625 "ddgst": false 00:16:29.625 }, 00:16:29.625 "method": "bdev_nvme_attach_controller" 00:16:29.625 },{ 00:16:29.625 "params": { 00:16:29.625 "name": "Nvme10", 00:16:29.625 "trtype": "rdma", 00:16:29.625 "traddr": "192.168.100.8", 00:16:29.625 "adrfam": "ipv4", 00:16:29.625 "trsvcid": "4420", 00:16:29.625 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:29.625 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:29.625 "hdgst": false, 00:16:29.625 "ddgst": false 00:16:29.625 }, 00:16:29.625 "method": "bdev_nvme_attach_controller" 00:16:29.625 }' 00:16:29.625 [2024-04-18 17:32:55.947803] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:16:29.625 [2024-04-18 17:32:55.947896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2030084 ] 00:16:29.625 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.625 [2024-04-18 17:32:56.012376] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.625 [2024-04-18 17:32:56.121982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.562 Running I/O for 1 seconds... 00:16:31.941 00:16:31.941 Latency(us) 00:16:31.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.941 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:31.941 Verification LBA range: start 0x0 length 0x400 00:16:31.941 Nvme1n1 : 1.19 295.77 18.49 0.00 0.00 212656.60 16019.91 267192.70 00:16:31.941 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:31.941 Verification LBA range: start 0x0 length 0x400 00:16:31.941 Nvme2n1 : 1.19 322.23 20.14 0.00 0.00 192211.37 7136.14 187966.96 00:16:31.941 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:31.941 Verification LBA range: start 0x0 length 0x400 00:16:31.941 Nvme3n1 : 1.19 321.80 20.11 0.00 0.00 189554.54 13398.47 180199.73 00:16:31.941 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:31.941 Verification LBA range: start 0x0 length 0x400 00:16:31.941 Nvme4n1 : 1.19 321.37 20.09 0.00 0.00 186843.34 16893.72 172432.50 00:16:31.941 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:31.941 Verification LBA range: start 0x0 length 0x400 00:16:31.941 Nvme5n1 : 1.20 320.85 20.05 0.00 0.00 184862.85 17573.36 160004.93 00:16:31.941 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:31.941 Verification LBA range: start 0x0 length 0x400 00:16:31.941 Nvme6n1 : 1.20 320.38 20.02 0.00 0.00 181834.71 18252.99 149130.81 00:16:31.941 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:31.941 Verification LBA range: start 0x0 length 0x400 00:16:31.941 Nvme7n1 : 1.20 319.95 20.00 0.00 0.00 178655.89 18641.35 141363.58 00:16:31.941 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:31.941 Verification LBA range: start 0x0 length 0x400 00:16:31.941 Nvme8n1 : 1.21 332.92 20.81 
0.00 0.00 169880.86 5024.43 129712.73 00:16:31.941 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:31.941 Verification LBA range: start 0x0 length 0x400 00:16:31.941 Nvme9n1 : 1.20 318.97 19.94 0.00 0.00 174875.94 15049.01 112624.83 00:16:31.941 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:31.941 Verification LBA range: start 0x0 length 0x400 00:16:31.941 Nvme10n1 : 1.21 318.37 19.90 0.00 0.00 172084.59 16408.27 127382.57 00:16:31.941 =================================================================================================================== 00:16:31.941 Total : 3192.61 199.54 0.00 0.00 184041.23 5024.43 267192.70 00:16:32.199 17:32:58 -- target/shutdown.sh@94 -- # stoptarget 00:16:32.199 17:32:58 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:32.199 17:32:58 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:32.199 17:32:58 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:32.199 17:32:58 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:32.199 17:32:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:32.199 17:32:58 -- nvmf/common.sh@117 -- # sync 00:16:32.199 17:32:58 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:32.199 17:32:58 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:32.199 17:32:58 -- nvmf/common.sh@120 -- # set +e 00:16:32.199 17:32:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:32.199 17:32:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:32.199 rmmod nvme_rdma 00:16:32.199 rmmod nvme_fabrics 00:16:32.199 17:32:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:32.199 17:32:58 -- nvmf/common.sh@124 -- # set -e 00:16:32.199 17:32:58 -- nvmf/common.sh@125 -- # return 0 00:16:32.199 17:32:58 -- nvmf/common.sh@478 -- # '[' -n 2029722 ']' 00:16:32.199 17:32:58 -- nvmf/common.sh@479 -- # killprocess 2029722 00:16:32.199 17:32:58 -- common/autotest_common.sh@936 -- # '[' -z 2029722 ']' 00:16:32.200 17:32:58 -- common/autotest_common.sh@940 -- # kill -0 2029722 00:16:32.200 17:32:58 -- common/autotest_common.sh@941 -- # uname 00:16:32.200 17:32:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:32.200 17:32:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2029722 00:16:32.200 17:32:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:32.200 17:32:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:32.200 17:32:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2029722' 00:16:32.200 killing process with pid 2029722 00:16:32.200 17:32:58 -- common/autotest_common.sh@955 -- # kill 2029722 00:16:32.200 17:32:58 -- common/autotest_common.sh@960 -- # wait 2029722 00:16:32.764 17:32:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:32.764 17:32:59 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:32.764 00:16:32.764 real 0m8.662s 00:16:32.764 user 0m28.652s 00:16:32.764 sys 0m2.511s 00:16:32.764 17:32:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:32.764 17:32:59 -- common/autotest_common.sh@10 -- # set +x 00:16:32.764 ************************************ 00:16:32.764 END TEST nvmf_shutdown_tc1 00:16:32.764 ************************************ 00:16:32.764 17:32:59 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:16:32.764 17:32:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 
1 ']' 00:16:32.764 17:32:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:32.764 17:32:59 -- common/autotest_common.sh@10 -- # set +x 00:16:33.024 ************************************ 00:16:33.024 START TEST nvmf_shutdown_tc2 00:16:33.024 ************************************ 00:16:33.024 17:32:59 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:16:33.024 17:32:59 -- target/shutdown.sh@99 -- # starttarget 00:16:33.024 17:32:59 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:33.024 17:32:59 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:16:33.024 17:32:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.024 17:32:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:33.024 17:32:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:33.024 17:32:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:33.024 17:32:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.024 17:32:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.024 17:32:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.024 17:32:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:33.024 17:32:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:33.024 17:32:59 -- common/autotest_common.sh@10 -- # set +x 00:16:33.024 17:32:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:33.024 17:32:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:33.024 17:32:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:33.024 17:32:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:33.024 17:32:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:33.024 17:32:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:33.024 17:32:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:33.024 17:32:59 -- nvmf/common.sh@295 -- # net_devs=() 00:16:33.024 17:32:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:33.024 17:32:59 -- nvmf/common.sh@296 -- # e810=() 00:16:33.024 17:32:59 -- nvmf/common.sh@296 -- # local -ga e810 00:16:33.024 17:32:59 -- nvmf/common.sh@297 -- # x722=() 00:16:33.024 17:32:59 -- nvmf/common.sh@297 -- # local -ga x722 00:16:33.024 17:32:59 -- nvmf/common.sh@298 -- # mlx=() 00:16:33.024 17:32:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:33.024 17:32:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.024 17:32:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.024 17:32:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.024 17:32:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.024 17:32:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.024 17:32:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.024 17:32:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.024 17:32:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.024 17:32:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.024 17:32:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.024 17:32:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.024 17:32:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:33.024 17:32:59 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:33.024 17:32:59 -- 
nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:33.024 17:32:59 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:33.024 17:32:59 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:33.024 17:32:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:33.024 17:32:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.024 17:32:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:16:33.024 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:16:33.024 17:32:59 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:33.024 17:32:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.024 17:32:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:16:33.024 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:16:33.024 17:32:59 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:33.024 17:32:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:33.024 17:32:59 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.024 17:32:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.024 17:32:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:33.024 17:32:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.024 17:32:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:16:33.024 Found net devices under 0000:09:00.0: mlx_0_0 00:16:33.024 17:32:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.024 17:32:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.024 17:32:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.024 17:32:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:33.024 17:32:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.024 17:32:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:16:33.024 Found net devices under 0000:09:00.1: mlx_0_1 00:16:33.024 17:32:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.024 17:32:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:33.024 17:32:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:33.024 17:32:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@409 -- # rdma_device_init 00:16:33.024 17:32:59 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:16:33.024 17:32:59 -- nvmf/common.sh@58 -- # uname 00:16:33.024 17:32:59 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:33.024 17:32:59 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:33.024 17:32:59 -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:33.024 17:32:59 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:33.024 17:32:59 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:33.024 17:32:59 -- nvmf/common.sh@66 -- # 
modprobe iw_cm 00:16:33.024 17:32:59 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:33.024 17:32:59 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:33.024 17:32:59 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:16:33.024 17:32:59 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:33.024 17:32:59 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:33.024 17:32:59 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:33.024 17:32:59 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:33.024 17:32:59 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:33.024 17:32:59 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:33.024 17:32:59 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:33.024 17:32:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:33.024 17:32:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.024 17:32:59 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:33.024 17:32:59 -- nvmf/common.sh@105 -- # continue 2 00:16:33.024 17:32:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:33.024 17:32:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.024 17:32:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.024 17:32:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:33.024 17:32:59 -- nvmf/common.sh@105 -- # continue 2 00:16:33.024 17:32:59 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:33.024 17:32:59 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:33.024 17:32:59 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:33.024 17:32:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:33.024 17:32:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:33.024 17:32:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:33.024 17:32:59 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:33.024 17:32:59 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:33.024 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:33.024 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:16:33.024 altname enp9s0f0np0 00:16:33.024 inet 192.168.100.8/24 scope global mlx_0_0 00:16:33.024 valid_lft forever preferred_lft forever 00:16:33.024 17:32:59 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:33.024 17:32:59 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:33.024 17:32:59 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:33.024 17:32:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:33.024 17:32:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:33.024 17:32:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:33.024 17:32:59 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:33.024 17:32:59 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:33.024 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:33.024 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:16:33.024 altname enp9s0f1np1 00:16:33.024 inet 192.168.100.9/24 scope global mlx_0_1 00:16:33.024 valid_lft forever preferred_lft forever 00:16:33.024 17:32:59 -- 
nvmf/common.sh@411 -- # return 0 00:16:33.024 17:32:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:33.024 17:32:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:33.024 17:32:59 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:16:33.024 17:32:59 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:16:33.024 17:32:59 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:33.024 17:32:59 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:33.024 17:32:59 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:33.024 17:32:59 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:33.024 17:32:59 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:33.024 17:32:59 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:33.025 17:32:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:33.025 17:32:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.025 17:32:59 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:33.025 17:32:59 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:33.025 17:32:59 -- nvmf/common.sh@105 -- # continue 2 00:16:33.025 17:32:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:33.025 17:32:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.025 17:32:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:33.025 17:32:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.025 17:32:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:33.025 17:32:59 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:33.025 17:32:59 -- nvmf/common.sh@105 -- # continue 2 00:16:33.025 17:32:59 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:33.025 17:32:59 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:33.025 17:32:59 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:33.025 17:32:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:33.025 17:32:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:33.025 17:32:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:33.025 17:32:59 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:33.025 17:32:59 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:33.025 17:32:59 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:33.025 17:32:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:33.025 17:32:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:33.025 17:32:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:33.025 17:32:59 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:16:33.025 192.168.100.9' 00:16:33.025 17:32:59 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:33.025 192.168.100.9' 00:16:33.025 17:32:59 -- nvmf/common.sh@446 -- # head -n 1 00:16:33.025 17:32:59 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:33.025 17:32:59 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:16:33.025 192.168.100.9' 00:16:33.025 17:32:59 -- nvmf/common.sh@447 -- # tail -n +2 00:16:33.025 17:32:59 -- nvmf/common.sh@447 -- # head -n 1 00:16:33.025 17:32:59 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:33.025 17:32:59 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:16:33.025 17:32:59 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:33.025 17:32:59 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:16:33.025 17:32:59 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:16:33.025 17:32:59 -- 
nvmf/common.sh@463 -- # modprobe nvme-rdma 00:16:33.025 17:32:59 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:33.025 17:32:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:33.025 17:32:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:33.025 17:32:59 -- common/autotest_common.sh@10 -- # set +x 00:16:33.025 17:32:59 -- nvmf/common.sh@470 -- # nvmfpid=2030696 00:16:33.025 17:32:59 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:33.025 17:32:59 -- nvmf/common.sh@471 -- # waitforlisten 2030696 00:16:33.025 17:32:59 -- common/autotest_common.sh@817 -- # '[' -z 2030696 ']' 00:16:33.025 17:32:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.025 17:32:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:33.025 17:32:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.025 17:32:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:33.025 17:32:59 -- common/autotest_common.sh@10 -- # set +x 00:16:33.025 [2024-04-18 17:32:59.561560] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:16:33.025 [2024-04-18 17:32:59.561650] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.285 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.285 [2024-04-18 17:32:59.625739] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:33.285 [2024-04-18 17:32:59.740609] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.285 [2024-04-18 17:32:59.740675] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.285 [2024-04-18 17:32:59.740699] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.285 [2024-04-18 17:32:59.740712] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.285 [2024-04-18 17:32:59.740724] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
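The target for this test case is started with -e 0xFFFF (all tracepoint groups) and -m 0x1E, which is why the notices above advertise spdk_trace and why four reactors (cores 1-4) come up in the next lines. While the target is alive, the announced trace ring can be snapshotted roughly as follows; only the -s/-i flags and the shm file name come from the notices themselves, and the binary path assumes the usual build layout of this workspace:

    # Snapshot the nvmf trace ring announced by the target (instance id 0)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
    # ...or keep a copy of the shared-memory ring for offline analysis, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0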
00:16:33.285 [2024-04-18 17:32:59.740824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.285 [2024-04-18 17:32:59.740921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:33.285 [2024-04-18 17:32:59.740986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:33.285 [2024-04-18 17:32:59.740989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.220 17:33:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:34.220 17:33:00 -- common/autotest_common.sh@850 -- # return 0 00:16:34.220 17:33:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:34.220 17:33:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:34.220 17:33:00 -- common/autotest_common.sh@10 -- # set +x 00:16:34.220 17:33:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.220 17:33:00 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:34.220 17:33:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.220 17:33:00 -- common/autotest_common.sh@10 -- # set +x 00:16:34.220 [2024-04-18 17:33:00.560292] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11dee40/0x11e3330) succeed. 00:16:34.220 [2024-04-18 17:33:00.571498] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11e0430/0x12249c0) succeed. 00:16:34.220 17:33:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.220 17:33:00 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:34.220 17:33:00 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:34.220 17:33:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:34.220 17:33:00 -- common/autotest_common.sh@10 -- # set +x 00:16:34.220 17:33:00 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:34.220 17:33:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:34.220 17:33:00 -- target/shutdown.sh@28 -- # cat 00:16:34.220 17:33:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:34.220 17:33:00 -- target/shutdown.sh@28 -- # cat 00:16:34.220 17:33:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:34.220 17:33:00 -- target/shutdown.sh@28 -- # cat 00:16:34.220 17:33:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:34.220 17:33:00 -- target/shutdown.sh@28 -- # cat 00:16:34.220 17:33:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:34.220 17:33:00 -- target/shutdown.sh@28 -- # cat 00:16:34.220 17:33:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:34.220 17:33:00 -- target/shutdown.sh@28 -- # cat 00:16:34.220 17:33:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:34.220 17:33:00 -- target/shutdown.sh@28 -- # cat 00:16:34.220 17:33:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:34.220 17:33:00 -- target/shutdown.sh@28 -- # cat 00:16:34.220 17:33:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:34.220 17:33:00 -- target/shutdown.sh@28 -- # cat 00:16:34.220 17:33:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:34.220 17:33:00 -- target/shutdown.sh@28 -- # cat 00:16:34.220 17:33:00 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:34.220 17:33:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.220 17:33:00 -- common/autotest_common.sh@10 -- # set +x 
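Each pass through the for i / cat loop above appends one subsystem's worth of RPCs to rpcs.txt, and the closing rpc_cmd replays the file against the target; the Malloc1-Malloc10 bdevs and the NVMe/RDMA listener on 192.168.100.8:4420 reported next are the result. Per iteration the batch looks roughly like this (reconstructed for illustration rather than copied from shutdown.sh; the malloc size, serial number and -a allow-any-host flag are assumptions, while the bdev/subsystem names and the rdma listener address match the log):

    bdev_malloc_create -b Malloc1 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420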
00:16:34.478 Malloc1 00:16:34.478 [2024-04-18 17:33:00.787140] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:34.478 Malloc2 00:16:34.478 Malloc3 00:16:34.478 Malloc4 00:16:34.478 Malloc5 00:16:34.478 Malloc6 00:16:34.736 Malloc7 00:16:34.736 Malloc8 00:16:34.736 Malloc9 00:16:34.736 Malloc10 00:16:34.736 17:33:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.736 17:33:01 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:34.736 17:33:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:34.736 17:33:01 -- common/autotest_common.sh@10 -- # set +x 00:16:34.736 17:33:01 -- target/shutdown.sh@103 -- # perfpid=2030972 00:16:34.736 17:33:01 -- target/shutdown.sh@104 -- # waitforlisten 2030972 /var/tmp/bdevperf.sock 00:16:34.736 17:33:01 -- common/autotest_common.sh@817 -- # '[' -z 2030972 ']' 00:16:34.736 17:33:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.736 17:33:01 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:34.736 17:33:01 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:34.736 17:33:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:34.736 17:33:01 -- nvmf/common.sh@521 -- # config=() 00:16:34.736 17:33:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:34.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:34.736 17:33:01 -- nvmf/common.sh@521 -- # local subsystem config 00:16:34.736 17:33:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:34.736 17:33:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.736 17:33:01 -- common/autotest_common.sh@10 -- # set +x 00:16:34.736 17:33:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.736 { 00:16:34.736 "params": { 00:16:34.736 "name": "Nvme$subsystem", 00:16:34.736 "trtype": "$TEST_TRANSPORT", 00:16:34.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.736 "adrfam": "ipv4", 00:16:34.736 "trsvcid": "$NVMF_PORT", 00:16:34.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.737 "hdgst": ${hdgst:-false}, 00:16:34.737 "ddgst": ${ddgst:-false} 00:16:34.737 }, 00:16:34.737 "method": "bdev_nvme_attach_controller" 00:16:34.737 } 00:16:34.737 EOF 00:16:34.737 )") 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # cat 00:16:34.737 17:33:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.737 { 00:16:34.737 "params": { 00:16:34.737 "name": "Nvme$subsystem", 00:16:34.737 "trtype": "$TEST_TRANSPORT", 00:16:34.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.737 "adrfam": "ipv4", 00:16:34.737 "trsvcid": "$NVMF_PORT", 00:16:34.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.737 "hdgst": ${hdgst:-false}, 00:16:34.737 "ddgst": ${ddgst:-false} 00:16:34.737 }, 00:16:34.737 "method": "bdev_nvme_attach_controller" 00:16:34.737 } 00:16:34.737 EOF 00:16:34.737 )") 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # cat 00:16:34.737 17:33:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # 
config+=("$(cat <<-EOF 00:16:34.737 { 00:16:34.737 "params": { 00:16:34.737 "name": "Nvme$subsystem", 00:16:34.737 "trtype": "$TEST_TRANSPORT", 00:16:34.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.737 "adrfam": "ipv4", 00:16:34.737 "trsvcid": "$NVMF_PORT", 00:16:34.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.737 "hdgst": ${hdgst:-false}, 00:16:34.737 "ddgst": ${ddgst:-false} 00:16:34.737 }, 00:16:34.737 "method": "bdev_nvme_attach_controller" 00:16:34.737 } 00:16:34.737 EOF 00:16:34.737 )") 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # cat 00:16:34.737 17:33:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.737 { 00:16:34.737 "params": { 00:16:34.737 "name": "Nvme$subsystem", 00:16:34.737 "trtype": "$TEST_TRANSPORT", 00:16:34.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.737 "adrfam": "ipv4", 00:16:34.737 "trsvcid": "$NVMF_PORT", 00:16:34.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.737 "hdgst": ${hdgst:-false}, 00:16:34.737 "ddgst": ${ddgst:-false} 00:16:34.737 }, 00:16:34.737 "method": "bdev_nvme_attach_controller" 00:16:34.737 } 00:16:34.737 EOF 00:16:34.737 )") 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # cat 00:16:34.737 17:33:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.737 { 00:16:34.737 "params": { 00:16:34.737 "name": "Nvme$subsystem", 00:16:34.737 "trtype": "$TEST_TRANSPORT", 00:16:34.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.737 "adrfam": "ipv4", 00:16:34.737 "trsvcid": "$NVMF_PORT", 00:16:34.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.737 "hdgst": ${hdgst:-false}, 00:16:34.737 "ddgst": ${ddgst:-false} 00:16:34.737 }, 00:16:34.737 "method": "bdev_nvme_attach_controller" 00:16:34.737 } 00:16:34.737 EOF 00:16:34.737 )") 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # cat 00:16:34.737 17:33:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.737 { 00:16:34.737 "params": { 00:16:34.737 "name": "Nvme$subsystem", 00:16:34.737 "trtype": "$TEST_TRANSPORT", 00:16:34.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.737 "adrfam": "ipv4", 00:16:34.737 "trsvcid": "$NVMF_PORT", 00:16:34.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.737 "hdgst": ${hdgst:-false}, 00:16:34.737 "ddgst": ${ddgst:-false} 00:16:34.737 }, 00:16:34.737 "method": "bdev_nvme_attach_controller" 00:16:34.737 } 00:16:34.737 EOF 00:16:34.737 )") 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # cat 00:16:34.737 17:33:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.737 { 00:16:34.737 "params": { 00:16:34.737 "name": "Nvme$subsystem", 00:16:34.737 "trtype": "$TEST_TRANSPORT", 00:16:34.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.737 "adrfam": "ipv4", 00:16:34.737 "trsvcid": "$NVMF_PORT", 00:16:34.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.737 "hdgst": ${hdgst:-false}, 00:16:34.737 "ddgst": ${ddgst:-false} 00:16:34.737 }, 00:16:34.737 "method": 
"bdev_nvme_attach_controller" 00:16:34.737 } 00:16:34.737 EOF 00:16:34.737 )") 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # cat 00:16:34.737 17:33:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.737 17:33:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.737 { 00:16:34.737 "params": { 00:16:34.737 "name": "Nvme$subsystem", 00:16:34.737 "trtype": "$TEST_TRANSPORT", 00:16:34.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.737 "adrfam": "ipv4", 00:16:34.737 "trsvcid": "$NVMF_PORT", 00:16:34.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.737 "hdgst": ${hdgst:-false}, 00:16:34.737 "ddgst": ${ddgst:-false} 00:16:34.737 }, 00:16:34.737 "method": "bdev_nvme_attach_controller" 00:16:34.737 } 00:16:34.737 EOF 00:16:34.737 )") 00:16:34.995 17:33:01 -- nvmf/common.sh@543 -- # cat 00:16:34.995 17:33:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.995 17:33:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.995 { 00:16:34.995 "params": { 00:16:34.995 "name": "Nvme$subsystem", 00:16:34.995 "trtype": "$TEST_TRANSPORT", 00:16:34.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.995 "adrfam": "ipv4", 00:16:34.995 "trsvcid": "$NVMF_PORT", 00:16:34.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.995 "hdgst": ${hdgst:-false}, 00:16:34.995 "ddgst": ${ddgst:-false} 00:16:34.995 }, 00:16:34.995 "method": "bdev_nvme_attach_controller" 00:16:34.995 } 00:16:34.995 EOF 00:16:34.995 )") 00:16:34.995 17:33:01 -- nvmf/common.sh@543 -- # cat 00:16:34.995 17:33:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:34.995 17:33:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:34.995 { 00:16:34.995 "params": { 00:16:34.995 "name": "Nvme$subsystem", 00:16:34.995 "trtype": "$TEST_TRANSPORT", 00:16:34.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.995 "adrfam": "ipv4", 00:16:34.995 "trsvcid": "$NVMF_PORT", 00:16:34.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.995 "hdgst": ${hdgst:-false}, 00:16:34.995 "ddgst": ${ddgst:-false} 00:16:34.995 }, 00:16:34.995 "method": "bdev_nvme_attach_controller" 00:16:34.995 } 00:16:34.995 EOF 00:16:34.995 )") 00:16:34.995 17:33:01 -- nvmf/common.sh@543 -- # cat 00:16:34.995 17:33:01 -- nvmf/common.sh@545 -- # jq . 
00:16:34.995 17:33:01 -- nvmf/common.sh@546 -- # IFS=, 00:16:34.995 17:33:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:34.995 "params": { 00:16:34.995 "name": "Nvme1", 00:16:34.995 "trtype": "rdma", 00:16:34.995 "traddr": "192.168.100.8", 00:16:34.995 "adrfam": "ipv4", 00:16:34.995 "trsvcid": "4420", 00:16:34.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:34.995 "hdgst": false, 00:16:34.995 "ddgst": false 00:16:34.995 }, 00:16:34.995 "method": "bdev_nvme_attach_controller" 00:16:34.995 },{ 00:16:34.995 "params": { 00:16:34.995 "name": "Nvme2", 00:16:34.995 "trtype": "rdma", 00:16:34.995 "traddr": "192.168.100.8", 00:16:34.995 "adrfam": "ipv4", 00:16:34.995 "trsvcid": "4420", 00:16:34.995 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:34.995 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:34.995 "hdgst": false, 00:16:34.995 "ddgst": false 00:16:34.995 }, 00:16:34.995 "method": "bdev_nvme_attach_controller" 00:16:34.995 },{ 00:16:34.995 "params": { 00:16:34.995 "name": "Nvme3", 00:16:34.995 "trtype": "rdma", 00:16:34.995 "traddr": "192.168.100.8", 00:16:34.995 "adrfam": "ipv4", 00:16:34.995 "trsvcid": "4420", 00:16:34.995 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:34.995 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:34.995 "hdgst": false, 00:16:34.995 "ddgst": false 00:16:34.995 }, 00:16:34.995 "method": "bdev_nvme_attach_controller" 00:16:34.995 },{ 00:16:34.995 "params": { 00:16:34.995 "name": "Nvme4", 00:16:34.995 "trtype": "rdma", 00:16:34.995 "traddr": "192.168.100.8", 00:16:34.995 "adrfam": "ipv4", 00:16:34.995 "trsvcid": "4420", 00:16:34.995 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:34.995 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:34.995 "hdgst": false, 00:16:34.995 "ddgst": false 00:16:34.995 }, 00:16:34.995 "method": "bdev_nvme_attach_controller" 00:16:34.995 },{ 00:16:34.995 "params": { 00:16:34.995 "name": "Nvme5", 00:16:34.995 "trtype": "rdma", 00:16:34.995 "traddr": "192.168.100.8", 00:16:34.995 "adrfam": "ipv4", 00:16:34.995 "trsvcid": "4420", 00:16:34.995 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:34.995 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:34.995 "hdgst": false, 00:16:34.995 "ddgst": false 00:16:34.995 }, 00:16:34.995 "method": "bdev_nvme_attach_controller" 00:16:34.995 },{ 00:16:34.995 "params": { 00:16:34.995 "name": "Nvme6", 00:16:34.995 "trtype": "rdma", 00:16:34.995 "traddr": "192.168.100.8", 00:16:34.995 "adrfam": "ipv4", 00:16:34.995 "trsvcid": "4420", 00:16:34.995 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:34.995 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:34.995 "hdgst": false, 00:16:34.995 "ddgst": false 00:16:34.995 }, 00:16:34.995 "method": "bdev_nvme_attach_controller" 00:16:34.995 },{ 00:16:34.995 "params": { 00:16:34.995 "name": "Nvme7", 00:16:34.995 "trtype": "rdma", 00:16:34.996 "traddr": "192.168.100.8", 00:16:34.996 "adrfam": "ipv4", 00:16:34.996 "trsvcid": "4420", 00:16:34.996 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:34.996 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:34.996 "hdgst": false, 00:16:34.996 "ddgst": false 00:16:34.996 }, 00:16:34.996 "method": "bdev_nvme_attach_controller" 00:16:34.996 },{ 00:16:34.996 "params": { 00:16:34.996 "name": "Nvme8", 00:16:34.996 "trtype": "rdma", 00:16:34.996 "traddr": "192.168.100.8", 00:16:34.996 "adrfam": "ipv4", 00:16:34.996 "trsvcid": "4420", 00:16:34.996 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:34.996 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:34.996 "hdgst": false, 00:16:34.996 "ddgst": false 00:16:34.996 }, 
00:16:34.996 "method": "bdev_nvme_attach_controller" 00:16:34.996 },{ 00:16:34.996 "params": { 00:16:34.996 "name": "Nvme9", 00:16:34.996 "trtype": "rdma", 00:16:34.996 "traddr": "192.168.100.8", 00:16:34.996 "adrfam": "ipv4", 00:16:34.996 "trsvcid": "4420", 00:16:34.996 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:34.996 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:34.996 "hdgst": false, 00:16:34.996 "ddgst": false 00:16:34.996 }, 00:16:34.996 "method": "bdev_nvme_attach_controller" 00:16:34.996 },{ 00:16:34.996 "params": { 00:16:34.996 "name": "Nvme10", 00:16:34.996 "trtype": "rdma", 00:16:34.996 "traddr": "192.168.100.8", 00:16:34.996 "adrfam": "ipv4", 00:16:34.996 "trsvcid": "4420", 00:16:34.996 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:34.996 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:34.996 "hdgst": false, 00:16:34.996 "ddgst": false 00:16:34.996 }, 00:16:34.996 "method": "bdev_nvme_attach_controller" 00:16:34.996 }' 00:16:34.996 [2024-04-18 17:33:01.293054] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:16:34.996 [2024-04-18 17:33:01.293142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2030972 ] 00:16:34.996 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.996 [2024-04-18 17:33:01.357983] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.996 [2024-04-18 17:33:01.466209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.930 Running I/O for 10 seconds... 00:16:35.930 17:33:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:35.930 17:33:02 -- common/autotest_common.sh@850 -- # return 0 00:16:35.930 17:33:02 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:35.930 17:33:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.930 17:33:02 -- common/autotest_common.sh@10 -- # set +x 00:16:36.187 17:33:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.187 17:33:02 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:36.187 17:33:02 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:36.187 17:33:02 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:36.187 17:33:02 -- target/shutdown.sh@57 -- # local ret=1 00:16:36.187 17:33:02 -- target/shutdown.sh@58 -- # local i 00:16:36.187 17:33:02 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:36.187 17:33:02 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:36.187 17:33:02 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:36.187 17:33:02 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:36.187 17:33:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.187 17:33:02 -- common/autotest_common.sh@10 -- # set +x 00:16:36.188 17:33:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.188 17:33:02 -- target/shutdown.sh@60 -- # read_io_count=3 00:16:36.188 17:33:02 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:16:36.188 17:33:02 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:36.445 17:33:02 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:36.445 17:33:02 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:36.445 17:33:02 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:36.445 17:33:02 -- target/shutdown.sh@60 -- # jq -r 
'.bdevs[0].num_read_ops' 00:16:36.445 17:33:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.445 17:33:02 -- common/autotest_common.sh@10 -- # set +x 00:16:36.705 17:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.705 17:33:03 -- target/shutdown.sh@60 -- # read_io_count=147 00:16:36.705 17:33:03 -- target/shutdown.sh@63 -- # '[' 147 -ge 100 ']' 00:16:36.705 17:33:03 -- target/shutdown.sh@64 -- # ret=0 00:16:36.705 17:33:03 -- target/shutdown.sh@65 -- # break 00:16:36.705 17:33:03 -- target/shutdown.sh@69 -- # return 0 00:16:36.705 17:33:03 -- target/shutdown.sh@110 -- # killprocess 2030972 00:16:36.705 17:33:03 -- common/autotest_common.sh@936 -- # '[' -z 2030972 ']' 00:16:36.705 17:33:03 -- common/autotest_common.sh@940 -- # kill -0 2030972 00:16:36.705 17:33:03 -- common/autotest_common.sh@941 -- # uname 00:16:36.705 17:33:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:36.705 17:33:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2030972 00:16:36.705 17:33:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:36.705 17:33:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:36.705 17:33:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2030972' 00:16:36.705 killing process with pid 2030972 00:16:36.705 17:33:03 -- common/autotest_common.sh@955 -- # kill 2030972 00:16:36.705 17:33:03 -- common/autotest_common.sh@960 -- # wait 2030972 00:16:36.965 Received shutdown signal, test time was about 0.954482 seconds 00:16:36.965 00:16:36.965 Latency(us) 00:16:36.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.965 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:36.965 Verification LBA range: start 0x0 length 0x400 00:16:36.965 Nvme1n1 : 0.94 290.72 18.17 0.00 0.00 214861.17 10534.31 214375.54 00:16:36.965 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:36.965 Verification LBA range: start 0x0 length 0x400 00:16:36.965 Nvme2n1 : 0.94 294.57 18.41 0.00 0.00 208132.91 10388.67 203501.42 00:16:36.965 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:36.965 Verification LBA range: start 0x0 length 0x400 00:16:36.965 Nvme3n1 : 0.94 312.19 19.51 0.00 0.00 192931.24 4781.70 194957.46 00:16:36.965 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:36.965 Verification LBA range: start 0x0 length 0x400 00:16:36.965 Nvme4n1 : 0.94 340.29 21.27 0.00 0.00 173848.05 6796.33 160004.93 00:16:36.965 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:36.965 Verification LBA range: start 0x0 length 0x400 00:16:36.965 Nvme5n1 : 0.94 320.49 20.03 0.00 0.00 181083.67 11942.12 173209.22 00:16:36.965 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:36.965 Verification LBA range: start 0x0 length 0x400 00:16:36.965 Nvme6n1 : 0.94 338.77 21.17 0.00 0.00 168526.13 13107.20 139810.13 00:16:36.965 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:36.965 Verification LBA range: start 0x0 length 0x400 00:16:36.965 Nvme7n1 : 0.95 338.04 21.13 0.00 0.00 164897.75 12379.02 125052.40 00:16:36.965 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:36.965 Verification LBA range: start 0x0 length 0x400 00:16:36.965 Nvme8n1 : 0.95 337.22 21.08 0.00 0.00 161948.56 15243.19 111848.11 00:16:36.965 Job: Nvme9n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:16:36.965 Verification LBA range: start 0x0 length 0x400 00:16:36.965 Nvme9n1 : 0.95 336.42 21.03 0.00 0.00 159160.55 16602.45 119615.34 00:16:36.965 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:36.965 Verification LBA range: start 0x0 length 0x400 00:16:36.965 Nvme10n1 : 0.95 268.51 16.78 0.00 0.00 194836.95 11650.84 228356.55 00:16:36.965 =================================================================================================================== 00:16:36.965 Total : 3177.22 198.58 0.00 0.00 180747.99 4781.70 228356.55 00:16:37.223 17:33:03 -- target/shutdown.sh@113 -- # sleep 1 00:16:38.158 17:33:04 -- target/shutdown.sh@114 -- # kill -0 2030696 00:16:38.158 17:33:04 -- target/shutdown.sh@116 -- # stoptarget 00:16:38.158 17:33:04 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:38.159 17:33:04 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:38.159 17:33:04 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:38.159 17:33:04 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:38.159 17:33:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:38.159 17:33:04 -- nvmf/common.sh@117 -- # sync 00:16:38.417 17:33:04 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:38.417 17:33:04 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:38.417 17:33:04 -- nvmf/common.sh@120 -- # set +e 00:16:38.417 17:33:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:38.417 17:33:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:38.417 rmmod nvme_rdma 00:16:38.417 rmmod nvme_fabrics 00:16:38.417 17:33:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:38.417 17:33:04 -- nvmf/common.sh@124 -- # set -e 00:16:38.417 17:33:04 -- nvmf/common.sh@125 -- # return 0 00:16:38.417 17:33:04 -- nvmf/common.sh@478 -- # '[' -n 2030696 ']' 00:16:38.417 17:33:04 -- nvmf/common.sh@479 -- # killprocess 2030696 00:16:38.417 17:33:04 -- common/autotest_common.sh@936 -- # '[' -z 2030696 ']' 00:16:38.417 17:33:04 -- common/autotest_common.sh@940 -- # kill -0 2030696 00:16:38.417 17:33:04 -- common/autotest_common.sh@941 -- # uname 00:16:38.417 17:33:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:38.417 17:33:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2030696 00:16:38.417 17:33:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:38.417 17:33:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:38.417 17:33:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2030696' 00:16:38.417 killing process with pid 2030696 00:16:38.417 17:33:04 -- common/autotest_common.sh@955 -- # kill 2030696 00:16:38.417 17:33:04 -- common/autotest_common.sh@960 -- # wait 2030696 00:16:38.983 17:33:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:38.983 17:33:05 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:38.983 00:16:38.983 real 0m5.997s 00:16:38.983 user 0m24.461s 00:16:38.983 sys 0m1.010s 00:16:38.983 17:33:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:38.983 17:33:05 -- common/autotest_common.sh@10 -- # set +x 00:16:38.983 ************************************ 00:16:38.983 END TEST nvmf_shutdown_tc2 00:16:38.983 ************************************ 00:16:38.983 17:33:05 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:16:38.983 17:33:05 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:38.983 17:33:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.983 17:33:05 -- common/autotest_common.sh@10 -- # set +x 00:16:38.983 ************************************ 00:16:38.983 START TEST nvmf_shutdown_tc3 00:16:38.983 ************************************ 00:16:38.983 17:33:05 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:16:38.983 17:33:05 -- target/shutdown.sh@121 -- # starttarget 00:16:38.983 17:33:05 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:38.983 17:33:05 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:16:38.983 17:33:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.983 17:33:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:38.983 17:33:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:38.983 17:33:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:38.983 17:33:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.983 17:33:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.983 17:33:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.983 17:33:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:38.983 17:33:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:38.983 17:33:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:38.983 17:33:05 -- common/autotest_common.sh@10 -- # set +x 00:16:38.983 17:33:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:38.983 17:33:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:38.983 17:33:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:38.983 17:33:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:38.983 17:33:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:38.983 17:33:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:38.983 17:33:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:38.983 17:33:05 -- nvmf/common.sh@295 -- # net_devs=() 00:16:38.983 17:33:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:38.983 17:33:05 -- nvmf/common.sh@296 -- # e810=() 00:16:38.983 17:33:05 -- nvmf/common.sh@296 -- # local -ga e810 00:16:38.983 17:33:05 -- nvmf/common.sh@297 -- # x722=() 00:16:38.983 17:33:05 -- nvmf/common.sh@297 -- # local -ga x722 00:16:38.983 17:33:05 -- nvmf/common.sh@298 -- # mlx=() 00:16:38.983 17:33:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:38.983 17:33:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:38.983 17:33:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:38.983 17:33:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:38.983 17:33:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:38.983 17:33:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:38.983 17:33:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:38.983 17:33:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:38.983 17:33:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:38.983 17:33:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:38.983 17:33:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:38.983 17:33:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:38.983 17:33:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:38.983 17:33:05 -- nvmf/common.sh@321 -- # [[ rdma 
== rdma ]] 00:16:38.983 17:33:05 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:38.983 17:33:05 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:38.983 17:33:05 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:38.983 17:33:05 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:38.983 17:33:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:38.984 17:33:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.984 17:33:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:16:38.984 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:16:38.984 17:33:05 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:38.984 17:33:05 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:38.984 17:33:05 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:38.984 17:33:05 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:38.984 17:33:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:38.984 17:33:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:16:38.984 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:16:38.984 17:33:05 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:38.984 17:33:05 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:38.984 17:33:05 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:38.984 17:33:05 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:38.984 17:33:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:38.984 17:33:05 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:38.984 17:33:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.984 17:33:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.984 17:33:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:38.984 17:33:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.984 17:33:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:16:38.984 Found net devices under 0000:09:00.0: mlx_0_0 00:16:38.984 17:33:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.984 17:33:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:38.984 17:33:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:38.984 17:33:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:38.984 17:33:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:38.984 17:33:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:16:38.984 Found net devices under 0000:09:00.1: mlx_0_1 00:16:38.984 17:33:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:38.984 17:33:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:38.984 17:33:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:38.984 17:33:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:38.984 17:33:05 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:16:38.984 17:33:05 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:16:38.984 17:33:05 -- nvmf/common.sh@409 -- # rdma_device_init 00:16:38.984 17:33:05 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:16:38.984 17:33:05 -- nvmf/common.sh@58 -- # uname 00:16:38.984 17:33:05 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:38.984 17:33:05 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:38.984 17:33:05 -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:38.984 17:33:05 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:39.241 17:33:05 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:39.241 
17:33:05 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:39.241 17:33:05 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:39.241 17:33:05 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:39.241 17:33:05 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:16:39.241 17:33:05 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:39.241 17:33:05 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:39.241 17:33:05 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:39.241 17:33:05 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:39.241 17:33:05 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:39.241 17:33:05 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:39.241 17:33:05 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:39.241 17:33:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:39.241 17:33:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:39.241 17:33:05 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:39.241 17:33:05 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:39.241 17:33:05 -- nvmf/common.sh@105 -- # continue 2 00:16:39.241 17:33:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:39.241 17:33:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:39.241 17:33:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:39.241 17:33:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:39.241 17:33:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:39.241 17:33:05 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:39.241 17:33:05 -- nvmf/common.sh@105 -- # continue 2 00:16:39.241 17:33:05 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:39.241 17:33:05 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:39.241 17:33:05 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:39.242 17:33:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:39.242 17:33:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:39.242 17:33:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:39.242 17:33:05 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:39.242 17:33:05 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:39.242 17:33:05 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:39.242 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:39.242 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:16:39.242 altname enp9s0f0np0 00:16:39.242 inet 192.168.100.8/24 scope global mlx_0_0 00:16:39.242 valid_lft forever preferred_lft forever 00:16:39.242 17:33:05 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:39.242 17:33:05 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:39.242 17:33:05 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:39.242 17:33:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:39.242 17:33:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:39.242 17:33:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:39.242 17:33:05 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:39.242 17:33:05 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:39.242 17:33:05 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:39.242 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:39.242 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:16:39.242 altname enp9s0f1np1 00:16:39.242 inet 192.168.100.9/24 scope global mlx_0_1 00:16:39.242 valid_lft forever preferred_lft forever 
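Annotation: the trace above derives each RDMA netdev's IPv4 address by piping "ip -o -4 addr show" through awk and cut. A minimal standalone sketch of that extraction, in bash (the helper name get_ipv4 is illustrative, not the suite's own):

    #!/usr/bin/env bash
    # Sketch: pull the IPv4 address assigned to an RDMA netdev, using the same
    # ip/awk/cut pipeline shown in the xtrace above.
    get_ipv4() {
        local ifc=$1
        # "ip -o -4" prints one record per address: "<idx>: <if> inet <addr>/<prefix> ..."
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }

    for ifc in mlx_0_0 mlx_0_1; do
        echo "$ifc -> $(get_ipv4 "$ifc")"   # in this run: 192.168.100.8 and 192.168.100.9
    done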
00:16:39.242 17:33:05 -- nvmf/common.sh@411 -- # return 0 00:16:39.242 17:33:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:39.242 17:33:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:39.242 17:33:05 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:16:39.242 17:33:05 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:16:39.242 17:33:05 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:39.242 17:33:05 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:39.242 17:33:05 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:39.242 17:33:05 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:39.242 17:33:05 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:39.242 17:33:05 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:39.242 17:33:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:39.242 17:33:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:39.242 17:33:05 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:39.242 17:33:05 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:39.242 17:33:05 -- nvmf/common.sh@105 -- # continue 2 00:16:39.242 17:33:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:39.242 17:33:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:39.242 17:33:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:39.242 17:33:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:39.242 17:33:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:39.242 17:33:05 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:39.242 17:33:05 -- nvmf/common.sh@105 -- # continue 2 00:16:39.242 17:33:05 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:39.242 17:33:05 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:39.242 17:33:05 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:39.242 17:33:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:39.242 17:33:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:39.242 17:33:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:39.242 17:33:05 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:39.242 17:33:05 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:39.242 17:33:05 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:39.242 17:33:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:39.242 17:33:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:39.242 17:33:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:39.242 17:33:05 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:16:39.242 192.168.100.9' 00:16:39.242 17:33:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:39.242 192.168.100.9' 00:16:39.242 17:33:05 -- nvmf/common.sh@446 -- # head -n 1 00:16:39.242 17:33:05 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:39.242 17:33:05 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:16:39.242 192.168.100.9' 00:16:39.242 17:33:05 -- nvmf/common.sh@447 -- # tail -n +2 00:16:39.242 17:33:05 -- nvmf/common.sh@447 -- # head -n 1 00:16:39.242 17:33:05 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:39.242 17:33:05 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:16:39.242 17:33:05 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:39.242 17:33:05 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:16:39.242 17:33:05 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 
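Annotation: the two target addresses are then picked out of the newline-separated RDMA_IP_LIST with head/tail exactly as traced above; condensed into a few lines (addresses and transport options copied from this run):

    # RDMA_IP_LIST holds one IPv4 address per RDMA netdev, newline-separated.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"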
00:16:39.242 17:33:05 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:16:39.242 17:33:05 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:39.242 17:33:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:39.242 17:33:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:39.242 17:33:05 -- common/autotest_common.sh@10 -- # set +x 00:16:39.242 17:33:05 -- nvmf/common.sh@470 -- # nvmfpid=2031658 00:16:39.242 17:33:05 -- nvmf/common.sh@471 -- # waitforlisten 2031658 00:16:39.242 17:33:05 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:39.242 17:33:05 -- common/autotest_common.sh@817 -- # '[' -z 2031658 ']' 00:16:39.242 17:33:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.242 17:33:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:39.242 17:33:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.242 17:33:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:39.242 17:33:05 -- common/autotest_common.sh@10 -- # set +x 00:16:39.242 [2024-04-18 17:33:05.656929] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:16:39.242 [2024-04-18 17:33:05.657004] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.242 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.242 [2024-04-18 17:33:05.719624] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.501 [2024-04-18 17:33:05.838220] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.501 [2024-04-18 17:33:05.838281] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.501 [2024-04-18 17:33:05.838303] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.501 [2024-04-18 17:33:05.838316] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.501 [2024-04-18 17:33:05.838327] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
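Annotation: nvmfappstart launches the target binary with the core mask and flags shown above and then blocks until its RPC socket answers. A hedged equivalent of that step (the readiness loop below is an assumption; the suite's own waitforlisten helper is not shown in this excerpt):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Start the NVMe-oF target exactly as traced: instance 0, tracepoint mask 0xFFFF,
    # core mask 0x1E (cores 1-4).
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # Stand-in for waitforlisten: retry framework_wait_init until the RPC server
    # at /var/tmp/spdk.sock is reachable and initialization has finished.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done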
00:16:39.501 [2024-04-18 17:33:05.838417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.501 [2024-04-18 17:33:05.838505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.501 [2024-04-18 17:33:05.838572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:39.501 [2024-04-18 17:33:05.838576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.501 17:33:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:39.502 17:33:05 -- common/autotest_common.sh@850 -- # return 0 00:16:39.502 17:33:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:39.502 17:33:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:39.502 17:33:05 -- common/autotest_common.sh@10 -- # set +x 00:16:39.502 17:33:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.502 17:33:05 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:39.502 17:33:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.502 17:33:05 -- common/autotest_common.sh@10 -- # set +x 00:16:39.502 [2024-04-18 17:33:06.012773] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2111e40/0x2116330) succeed. 00:16:39.502 [2024-04-18 17:33:06.023760] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2113430/0x21579c0) succeed. 00:16:39.761 17:33:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.761 17:33:06 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:39.761 17:33:06 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:39.761 17:33:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:39.761 17:33:06 -- common/autotest_common.sh@10 -- # set +x 00:16:39.761 17:33:06 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:39.761 17:33:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:39.761 17:33:06 -- target/shutdown.sh@28 -- # cat 00:16:39.761 17:33:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:39.761 17:33:06 -- target/shutdown.sh@28 -- # cat 00:16:39.761 17:33:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:39.761 17:33:06 -- target/shutdown.sh@28 -- # cat 00:16:39.761 17:33:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:39.761 17:33:06 -- target/shutdown.sh@28 -- # cat 00:16:39.761 17:33:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:39.761 17:33:06 -- target/shutdown.sh@28 -- # cat 00:16:39.761 17:33:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:39.761 17:33:06 -- target/shutdown.sh@28 -- # cat 00:16:39.761 17:33:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:39.761 17:33:06 -- target/shutdown.sh@28 -- # cat 00:16:39.761 17:33:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:39.761 17:33:06 -- target/shutdown.sh@28 -- # cat 00:16:39.761 17:33:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:39.761 17:33:06 -- target/shutdown.sh@28 -- # cat 00:16:39.761 17:33:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:39.761 17:33:06 -- target/shutdown.sh@28 -- # cat 00:16:39.761 17:33:06 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:39.761 17:33:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.761 17:33:06 -- common/autotest_common.sh@10 -- # set +x 
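Annotation: the loop above only cats one RPC line per subsystem into rpcs.txt and replays the file with rpc_cmd, so the exact commands are not echoed in this log. Judging from the Malloc bdevs and the RDMA listener that appear next, a typical per-subsystem sequence would look roughly like the sketch below (standard SPDK RPCs; the malloc size, block size and serial number are assumed, not taken from this run):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    i=1
    # The rdma transport itself was created above:
    #   nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc$i          # 64 MB backing bdev, 512 B blocks (assumed sizes)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
         -a -s SPDK0000000000000$i                      # allow any host; serial number is illustrative
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
         -t rdma -a 192.168.100.8 -s 4420               # matches the RDMA listener logged below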
00:16:39.761 Malloc1 00:16:39.761 [2024-04-18 17:33:06.249590] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:39.761 Malloc2 00:16:40.021 Malloc3 00:16:40.021 Malloc4 00:16:40.021 Malloc5 00:16:40.021 Malloc6 00:16:40.021 Malloc7 00:16:40.280 Malloc8 00:16:40.280 Malloc9 00:16:40.280 Malloc10 00:16:40.280 17:33:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.280 17:33:06 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:40.280 17:33:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:40.280 17:33:06 -- common/autotest_common.sh@10 -- # set +x 00:16:40.280 17:33:06 -- target/shutdown.sh@125 -- # perfpid=2031835 00:16:40.280 17:33:06 -- target/shutdown.sh@126 -- # waitforlisten 2031835 /var/tmp/bdevperf.sock 00:16:40.280 17:33:06 -- common/autotest_common.sh@817 -- # '[' -z 2031835 ']' 00:16:40.280 17:33:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.280 17:33:06 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:40.280 17:33:06 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:40.280 17:33:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:40.280 17:33:06 -- nvmf/common.sh@521 -- # config=() 00:16:40.280 17:33:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:40.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:40.280 17:33:06 -- nvmf/common.sh@521 -- # local subsystem config 00:16:40.280 17:33:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:40.280 17:33:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:40.280 17:33:06 -- common/autotest_common.sh@10 -- # set +x 00:16:40.280 17:33:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:40.280 { 00:16:40.280 "params": { 00:16:40.280 "name": "Nvme$subsystem", 00:16:40.280 "trtype": "$TEST_TRANSPORT", 00:16:40.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.280 "adrfam": "ipv4", 00:16:40.280 "trsvcid": "$NVMF_PORT", 00:16:40.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.280 "hdgst": ${hdgst:-false}, 00:16:40.280 "ddgst": ${ddgst:-false} 00:16:40.280 }, 00:16:40.280 "method": "bdev_nvme_attach_controller" 00:16:40.280 } 00:16:40.280 EOF 00:16:40.280 )") 00:16:40.280 17:33:06 -- nvmf/common.sh@543 -- # cat 00:16:40.280 17:33:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:40.280 17:33:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:40.280 { 00:16:40.280 "params": { 00:16:40.280 "name": "Nvme$subsystem", 00:16:40.280 "trtype": "$TEST_TRANSPORT", 00:16:40.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.280 "adrfam": "ipv4", 00:16:40.280 "trsvcid": "$NVMF_PORT", 00:16:40.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.280 "hdgst": ${hdgst:-false}, 00:16:40.280 "ddgst": ${ddgst:-false} 00:16:40.280 }, 00:16:40.280 "method": "bdev_nvme_attach_controller" 00:16:40.280 } 00:16:40.280 EOF 00:16:40.280 )") 00:16:40.280 17:33:06 -- nvmf/common.sh@543 -- # cat 00:16:40.280 17:33:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:40.280 17:33:06 -- nvmf/common.sh@543 -- # 
config+=("$(cat <<-EOF 00:16:40.280 { 00:16:40.280 "params": { 00:16:40.280 "name": "Nvme$subsystem", 00:16:40.280 "trtype": "$TEST_TRANSPORT", 00:16:40.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.280 "adrfam": "ipv4", 00:16:40.280 "trsvcid": "$NVMF_PORT", 00:16:40.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.280 "hdgst": ${hdgst:-false}, 00:16:40.280 "ddgst": ${ddgst:-false} 00:16:40.280 }, 00:16:40.280 "method": "bdev_nvme_attach_controller" 00:16:40.280 } 00:16:40.280 EOF 00:16:40.280 )") 00:16:40.280 17:33:06 -- nvmf/common.sh@543 -- # cat 00:16:40.280 17:33:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:40.280 17:33:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:40.280 { 00:16:40.280 "params": { 00:16:40.280 "name": "Nvme$subsystem", 00:16:40.280 "trtype": "$TEST_TRANSPORT", 00:16:40.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.280 "adrfam": "ipv4", 00:16:40.280 "trsvcid": "$NVMF_PORT", 00:16:40.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.281 "hdgst": ${hdgst:-false}, 00:16:40.281 "ddgst": ${ddgst:-false} 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 } 00:16:40.281 EOF 00:16:40.281 )") 00:16:40.281 17:33:06 -- nvmf/common.sh@543 -- # cat 00:16:40.281 17:33:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:40.281 17:33:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:40.281 { 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme$subsystem", 00:16:40.281 "trtype": "$TEST_TRANSPORT", 00:16:40.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "$NVMF_PORT", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.281 "hdgst": ${hdgst:-false}, 00:16:40.281 "ddgst": ${ddgst:-false} 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 } 00:16:40.281 EOF 00:16:40.281 )") 00:16:40.281 17:33:06 -- nvmf/common.sh@543 -- # cat 00:16:40.281 17:33:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:40.281 17:33:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:40.281 { 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme$subsystem", 00:16:40.281 "trtype": "$TEST_TRANSPORT", 00:16:40.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "$NVMF_PORT", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.281 "hdgst": ${hdgst:-false}, 00:16:40.281 "ddgst": ${ddgst:-false} 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 } 00:16:40.281 EOF 00:16:40.281 )") 00:16:40.281 17:33:06 -- nvmf/common.sh@543 -- # cat 00:16:40.281 17:33:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:40.281 17:33:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:40.281 { 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme$subsystem", 00:16:40.281 "trtype": "$TEST_TRANSPORT", 00:16:40.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "$NVMF_PORT", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.281 "hdgst": ${hdgst:-false}, 00:16:40.281 "ddgst": ${ddgst:-false} 00:16:40.281 }, 00:16:40.281 "method": 
"bdev_nvme_attach_controller" 00:16:40.281 } 00:16:40.281 EOF 00:16:40.281 )") 00:16:40.281 17:33:06 -- nvmf/common.sh@543 -- # cat 00:16:40.281 17:33:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:40.281 17:33:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:40.281 { 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme$subsystem", 00:16:40.281 "trtype": "$TEST_TRANSPORT", 00:16:40.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "$NVMF_PORT", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.281 "hdgst": ${hdgst:-false}, 00:16:40.281 "ddgst": ${ddgst:-false} 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 } 00:16:40.281 EOF 00:16:40.281 )") 00:16:40.281 17:33:06 -- nvmf/common.sh@543 -- # cat 00:16:40.281 17:33:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:40.281 17:33:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:40.281 { 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme$subsystem", 00:16:40.281 "trtype": "$TEST_TRANSPORT", 00:16:40.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "$NVMF_PORT", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.281 "hdgst": ${hdgst:-false}, 00:16:40.281 "ddgst": ${ddgst:-false} 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 } 00:16:40.281 EOF 00:16:40.281 )") 00:16:40.281 17:33:06 -- nvmf/common.sh@543 -- # cat 00:16:40.281 17:33:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:40.281 17:33:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:40.281 { 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme$subsystem", 00:16:40.281 "trtype": "$TEST_TRANSPORT", 00:16:40.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "$NVMF_PORT", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.281 "hdgst": ${hdgst:-false}, 00:16:40.281 "ddgst": ${ddgst:-false} 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 } 00:16:40.281 EOF 00:16:40.281 )") 00:16:40.281 17:33:06 -- nvmf/common.sh@543 -- # cat 00:16:40.281 17:33:06 -- nvmf/common.sh@545 -- # jq . 
00:16:40.281 17:33:06 -- nvmf/common.sh@546 -- # IFS=, 00:16:40.281 17:33:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme1", 00:16:40.281 "trtype": "rdma", 00:16:40.281 "traddr": "192.168.100.8", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "4420", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:40.281 "hdgst": false, 00:16:40.281 "ddgst": false 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 },{ 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme2", 00:16:40.281 "trtype": "rdma", 00:16:40.281 "traddr": "192.168.100.8", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "4420", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:40.281 "hdgst": false, 00:16:40.281 "ddgst": false 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 },{ 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme3", 00:16:40.281 "trtype": "rdma", 00:16:40.281 "traddr": "192.168.100.8", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "4420", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:40.281 "hdgst": false, 00:16:40.281 "ddgst": false 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 },{ 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme4", 00:16:40.281 "trtype": "rdma", 00:16:40.281 "traddr": "192.168.100.8", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "4420", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:40.281 "hdgst": false, 00:16:40.281 "ddgst": false 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 },{ 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme5", 00:16:40.281 "trtype": "rdma", 00:16:40.281 "traddr": "192.168.100.8", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "4420", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:40.281 "hdgst": false, 00:16:40.281 "ddgst": false 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 },{ 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme6", 00:16:40.281 "trtype": "rdma", 00:16:40.281 "traddr": "192.168.100.8", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "4420", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:40.281 "hdgst": false, 00:16:40.281 "ddgst": false 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 },{ 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme7", 00:16:40.281 "trtype": "rdma", 00:16:40.281 "traddr": "192.168.100.8", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "4420", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:40.281 "hdgst": false, 00:16:40.281 "ddgst": false 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 },{ 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme8", 00:16:40.281 "trtype": "rdma", 00:16:40.281 "traddr": "192.168.100.8", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "4420", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:40.281 "hdgst": false, 00:16:40.281 "ddgst": false 00:16:40.281 }, 
00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 },{ 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme9", 00:16:40.281 "trtype": "rdma", 00:16:40.281 "traddr": "192.168.100.8", 00:16:40.281 "adrfam": "ipv4", 00:16:40.281 "trsvcid": "4420", 00:16:40.281 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:40.281 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:40.281 "hdgst": false, 00:16:40.281 "ddgst": false 00:16:40.281 }, 00:16:40.281 "method": "bdev_nvme_attach_controller" 00:16:40.281 },{ 00:16:40.281 "params": { 00:16:40.281 "name": "Nvme10", 00:16:40.282 "trtype": "rdma", 00:16:40.282 "traddr": "192.168.100.8", 00:16:40.282 "adrfam": "ipv4", 00:16:40.282 "trsvcid": "4420", 00:16:40.282 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:40.282 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:40.282 "hdgst": false, 00:16:40.282 "ddgst": false 00:16:40.282 }, 00:16:40.282 "method": "bdev_nvme_attach_controller" 00:16:40.282 }' 00:16:40.282 [2024-04-18 17:33:06.766923] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:16:40.282 [2024-04-18 17:33:06.767012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2031835 ] 00:16:40.282 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.539 [2024-04-18 17:33:06.831437] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.539 [2024-04-18 17:33:06.939840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.507 Running I/O for 10 seconds... 00:16:41.507 17:33:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:41.507 17:33:07 -- common/autotest_common.sh@850 -- # return 0 00:16:41.507 17:33:07 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:41.507 17:33:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.507 17:33:07 -- common/autotest_common.sh@10 -- # set +x 00:16:41.507 17:33:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.507 17:33:08 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:41.507 17:33:08 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:41.507 17:33:08 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:41.507 17:33:08 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:41.507 17:33:08 -- target/shutdown.sh@57 -- # local ret=1 00:16:41.507 17:33:08 -- target/shutdown.sh@58 -- # local i 00:16:41.507 17:33:08 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:41.507 17:33:08 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:41.507 17:33:08 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:41.507 17:33:08 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:41.507 17:33:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.507 17:33:08 -- common/autotest_common.sh@10 -- # set +x 00:16:41.766 17:33:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.766 17:33:08 -- target/shutdown.sh@60 -- # read_io_count=3 00:16:41.766 17:33:08 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:16:41.766 17:33:08 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:42.024 17:33:08 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:42.024 17:33:08 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:42.024 17:33:08 -- 
target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:42.024 17:33:08 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:42.024 17:33:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.024 17:33:08 -- common/autotest_common.sh@10 -- # set +x 00:16:42.024 17:33:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.024 17:33:08 -- target/shutdown.sh@60 -- # read_io_count=131 00:16:42.024 17:33:08 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:16:42.024 17:33:08 -- target/shutdown.sh@64 -- # ret=0 00:16:42.024 17:33:08 -- target/shutdown.sh@65 -- # break 00:16:42.024 17:33:08 -- target/shutdown.sh@69 -- # return 0 00:16:42.024 17:33:08 -- target/shutdown.sh@135 -- # killprocess 2031658 00:16:42.024 17:33:08 -- common/autotest_common.sh@936 -- # '[' -z 2031658 ']' 00:16:42.024 17:33:08 -- common/autotest_common.sh@940 -- # kill -0 2031658 00:16:42.024 17:33:08 -- common/autotest_common.sh@941 -- # uname 00:16:42.024 17:33:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.024 17:33:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2031658 00:16:42.282 17:33:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:42.282 17:33:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:42.282 17:33:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2031658' 00:16:42.282 killing process with pid 2031658 00:16:42.282 17:33:08 -- common/autotest_common.sh@955 -- # kill 2031658 00:16:42.282 17:33:08 -- common/autotest_common.sh@960 -- # wait 2031658 00:16:42.848 17:33:09 -- target/shutdown.sh@136 -- # nvmfpid= 00:16:42.848 17:33:09 -- target/shutdown.sh@139 -- # sleep 1 00:16:43.107 [2024-04-18 17:33:09.629479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.107 [2024-04-18 17:33:09.629543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:16:43.107 [2024-04-18 17:33:09.629565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.107 [2024-04-18 17:33:09.629581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:16:43.107 [2024-04-18 17:33:09.629596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.107 [2024-04-18 17:33:09.629612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:16:43.107 [2024-04-18 17:33:09.629627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.107 [2024-04-18 17:33:09.629642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:16:43.107 [2024-04-18 17:33:09.631445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:43.107 [2024-04-18 17:33:09.631475] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
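Annotation: the JSON printed above is what bdevperf receives through the process substitution on its command line earlier in the log (--json /dev/fd/63). Reassembled as one invocation (assumes nvmf/common.sh, which defines gen_nvmf_target_json, has been sourced):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!

waitforio, traced just above for the second time in this log, then simply polls bdevperf's per-bdev read counter over the RPC socket until it reaches 100 (read_io_count went from 3 to 131 here). A minimal equivalent of that loop, with the same 10-iteration budget:

    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
    ret=1
    for ((i = 10; i != 0; i--)); do
        # Same query as the trace: cumulative read ops of the first bdev, via jq.
        read_io_count=$($RPC bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    echo "waitforio ret=$ret"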
00:16:43.107 [2024-04-18 17:33:09.631506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.107 [2024-04-18 17:33:09.631526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.107 [2024-04-18 17:33:09.631542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.107 [2024-04-18 17:33:09.631558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.107 [2024-04-18 17:33:09.631573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.107 [2024-04-18 17:33:09.631589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.107 [2024-04-18 17:33:09.631604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.107 [2024-04-18 17:33:09.631619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.107 [2024-04-18 17:33:09.633309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:43.107 [2024-04-18 17:33:09.633343] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:16:43.107 [2024-04-18 17:33:09.633371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.107 [2024-04-18 17:33:09.633400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.107 [2024-04-18 17:33:09.633418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.107 [2024-04-18 17:33:09.633433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.107 [2024-04-18 17:33:09.633448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.107 [2024-04-18 17:33:09.633462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.107 [2024-04-18 17:33:09.633478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.107 [2024-04-18 17:33:09.633492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.107 [2024-04-18 17:33:09.635208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:43.107 [2024-04-18 17:33:09.635234] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:16:43.107 [2024-04-18 17:33:09.635262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.107 [2024-04-18 17:33:09.635282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.635298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.635313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.635329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.635344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.635359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.635374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.637202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:43.108 [2024-04-18 17:33:09.637238] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:16:43.108 [2024-04-18 17:33:09.637269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.637290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.637306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.637321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.637337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.637359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.637375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.637404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.639090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:43.108 [2024-04-18 17:33:09.639136] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:16:43.108 [2024-04-18 17:33:09.639168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.639189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.639206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.639222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.639237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.639252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.639268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.639282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.640829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:43.108 [2024-04-18 17:33:09.640857] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:16:43.108 [2024-04-18 17:33:09.640884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.640904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.640920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.640935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.640951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.640966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.640981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.640996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.642422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:43.108 [2024-04-18 17:33:09.642449] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:16:43.108 [2024-04-18 17:33:09.642477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.642503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.642520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.642535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.642551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.642566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.642581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.642596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.644051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:43.108 [2024-04-18 17:33:09.644093] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:43.108 [2024-04-18 17:33:09.644124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.644144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.644161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.644178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.644194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.644209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.644224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.644239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.645686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:43.108 [2024-04-18 17:33:09.645718] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:16:43.108 [2024-04-18 17:33:09.645746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.645766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.645782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.645797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.645814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.645829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.108 [2024-04-18 17:33:09.645851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.108 [2024-04-18 17:33:09.645867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62819 cdw0:0 sqhd:9100 p:0 m:1 dnr:0 00:16:43.376 [2024-04-18 17:33:09.647637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:43.376 [2024-04-18 17:33:09.647684] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:16:43.377 [2024-04-18 17:33:09.649356] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256f00 was disconnected and freed. reset controller. 00:16:43.377 [2024-04-18 17:33:09.649392] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:43.377 [2024-04-18 17:33:09.650697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e3eb40 len:0x10000 key:0x187500 00:16:43.377 [2024-04-18 17:33:09.650725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.650757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e2eac0 len:0x10000 key:0x187500 00:16:43.377 [2024-04-18 17:33:09.650776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.650799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e1ea40 len:0x10000 key:0x187500 00:16:43.377 [2024-04-18 17:33:09.650817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.650838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e0e9c0 len:0x10000 key:0x187500 00:16:43.377 [2024-04-18 17:33:09.650856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.650879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002c7740 len:0x10000 key:0x188a00 00:16:43.377 [2024-04-18 17:33:09.650896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.650917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002b76c0 len:0x10000 key:0x188a00 00:16:43.377 [2024-04-18 17:33:09.650935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.650957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002a7640 len:0x10000 key:0x188a00 00:16:43.377 [2024-04-18 17:33:09.650975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.650997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002975c0 len:0x10000 key:0x188a00 00:16:43.377 [2024-04-18 17:33:09.651015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000287540 len:0x10000 key:0x188a00 00:16:43.377 [2024-04-18 17:33:09.651054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 
00:16:43.377 [2024-04-18 17:33:09.651083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002774c0 len:0x10000 key:0x188a00 00:16:43.377 [2024-04-18 17:33:09.651102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000267440 len:0x10000 key:0x188a00 00:16:43.377 [2024-04-18 17:33:09.651142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002573c0 len:0x10000 key:0x188a00 00:16:43.377 [2024-04-18 17:33:09.651181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000247340 len:0x10000 key:0x188a00 00:16:43.377 [2024-04-18 17:33:09.651221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002372c0 len:0x10000 key:0x188a00 00:16:43.377 [2024-04-18 17:33:09.651260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000227240 len:0x10000 key:0x188a00 00:16:43.377 [2024-04-18 17:33:09.651299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002171c0 len:0x10000 key:0x188a00 00:16:43.377 [2024-04-18 17:33:09.651340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000207140 len:0x10000 key:0x188a00 00:16:43.377 [2024-04-18 17:33:09.651379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071eff80 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 
00:16:43.377 [2024-04-18 17:33:09.651454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071dff00 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071cfe80 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071bfe00 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071afd80 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000719fd00 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000718fc80 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000717fc00 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000716fb80 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000715fb00 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 
00:16:43.377 [2024-04-18 17:33:09.651813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000714fa80 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000713fa00 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000712f980 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000711f900 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.651978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000710f880 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.651996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.652019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ff800 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.652036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.652058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ef780 len:0x10000 key:0x187600 00:16:43.377 [2024-04-18 17:33:09.652076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.377 [2024-04-18 17:33:09.652098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070df700 len:0x10000 key:0x187600 00:16:43.378 [2024-04-18 17:33:09.652115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.652138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070cf680 len:0x10000 key:0x187600 00:16:43.378 [2024-04-18 17:33:09.652155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 
00:16:43.378 [2024-04-18 17:33:09.652176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070bf600 len:0x10000 key:0x187600 00:16:43.378 [2024-04-18 17:33:09.652194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.652217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070af580 len:0x10000 key:0x187600 00:16:43.378 [2024-04-18 17:33:09.652233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.652255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000709f500 len:0x10000 key:0x187600 00:16:43.378 [2024-04-18 17:33:09.652274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.652296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000708f480 len:0x10000 key:0x187600 00:16:43.378 [2024-04-18 17:33:09.652314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.652335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000707f400 len:0x10000 key:0x187600 00:16:43.378 [2024-04-18 17:33:09.652353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.652376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000706f380 len:0x10000 key:0x187600 00:16:43.378 [2024-04-18 17:33:09.652407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.652431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000705f300 len:0x10000 key:0x187600 00:16:43.378 [2024-04-18 17:33:09.652454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.652477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000704f280 len:0x10000 key:0x187600 00:16:43.378 [2024-04-18 17:33:09.652495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.652517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000703f200 len:0x10000 key:0x187600 00:16:43.378 [2024-04-18 17:33:09.652535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 
00:16:43.378 [2024-04-18 17:33:09.652556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000702f180 len:0x10000 key:0x187600 00:16:43.378 [2024-04-18 17:33:09.652575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.652597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000701f100 len:0x10000 key:0x187600 00:16:43.378 [2024-04-18 17:33:09.652615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.652637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2efd80 len:0x10000 key:0x187700 00:16:43.378 [2024-04-18 17:33:09.652654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.654261] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256cc0 was disconnected and freed. reset controller. 00:16:43.378 [2024-04-18 17:33:09.654296] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.378 [2024-04-18 17:33:09.655292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000046f200 len:0x10000 key:0x187200 00:16:43.378 [2024-04-18 17:33:09.655321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000045f180 len:0x10000 key:0x187200 00:16:43.378 [2024-04-18 17:33:09.655369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000044f100 len:0x10000 key:0x187200 00:16:43.378 [2024-04-18 17:33:09.655421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000043f080 len:0x10000 key:0x187200 00:16:43.378 [2024-04-18 17:33:09.655463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000042f000 len:0x10000 key:0x187200 00:16:43.378 [2024-04-18 17:33:09.655509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000041ef80 len:0x10000 key:0x187200 00:16:43.378 [2024-04-18 17:33:09.655551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000040ef00 len:0x10000 key:0x187200 00:16:43.378 [2024-04-18 17:33:09.655592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195f0000 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.655633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195dff80 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.655673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195cff00 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.655714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195bfe80 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.655753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195afe00 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.655792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001959fd80 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.655832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001958fd00 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.655872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001957fc80 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.655912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001956fc00 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.655952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.655978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001955fb80 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.655998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.656020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001954fb00 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.656039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.656061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001953fa80 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.656079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.656102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001952fa00 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.656120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.656143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001951f980 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.656162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.378 [2024-04-18 17:33:09.656185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001950f900 len:0x10000 key:0x188c00 00:16:43.378 [2024-04-18 17:33:09.656203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194ff880 len:0x10000 key:0x188c00 00:16:43.379 [2024-04-18 17:33:09.656243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194ef800 len:0x10000 key:0x188c00 00:16:43.379 [2024-04-18 17:33:09.656283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000700f080 len:0x10000 key:0x187600 00:16:43.379 [2024-04-18 17:33:09.656322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008ef180 len:0x10000 key:0x187300 00:16:43.379 [2024-04-18 17:33:09.656363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008df100 len:0x10000 key:0x187300 00:16:43.379 [2024-04-18 17:33:09.656413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008cf080 len:0x10000 key:0x187300 00:16:43.379 [2024-04-18 17:33:09.656459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008bf000 len:0x10000 key:0x187300 00:16:43.379 [2024-04-18 17:33:09.656498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008aef80 len:0x10000 key:0x187300 00:16:43.379 [2024-04-18 17:33:09.656538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000089ef00 len:0x10000 key:0x187300 00:16:43.379 [2024-04-18 17:33:09.656577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000088ee80 len:0x10000 key:0x187300 00:16:43.379 [2024-04-18 17:33:09.656617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e790000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.656657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e7b1000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.656714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e7d2000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.656756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e7f3000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.656797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e814000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.656839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e835000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.656881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e856000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.656928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000132d8000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.656970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.656994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000132b7000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013296000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013275000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013254000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013233000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013212000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131f1000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131d0000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9de000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9ff000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011be6000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011bc5000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ba4000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b83000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b62000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b41000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b20000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.379 [2024-04-18 17:33:09.657748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c45f000 len:0x10000 key:0x187700 00:16:43.379 [2024-04-18 17:33:09.657766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.657790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x187700 00:16:43.380 [2024-04-18 17:33:09.657808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.657830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c41d000 len:0x10000 key:0x187700 00:16:43.380 [2024-04-18 17:33:09.657849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.657871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3fc000 len:0x10000 key:0x187700 00:16:43.380 [2024-04-18 17:33:09.657895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.657919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3db000 len:0x10000 key:0x187700 00:16:43.380 [2024-04-18 17:33:09.657937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.657960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ba000 len:0x10000 key:0x187700 00:16:43.380 [2024-04-18 17:33:09.657979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.658002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c399000 len:0x10000 key:0x187700 00:16:43.380 [2024-04-18 17:33:09.658019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32673 cdw0:8708990 sqhd:9f24 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.660767] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a80 was disconnected and freed. reset controller. 00:16:43.380 [2024-04-18 17:33:09.660802] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:43.380 [2024-04-18 17:33:09.660828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196dfd80 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.660847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.660875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfd00 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.660895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.660917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.660935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.660957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afc00 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.660975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.660997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969fb80 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.661014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968fb00 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.661053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.661093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966fa00 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.661140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001965f980 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.661180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 
17:33:09.661201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001964f900 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.661219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001963f880 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.661258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f800 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.661297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001961f780 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.661336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001960f700 len:0x10000 key:0x188500 00:16:43.380 [2024-04-18 17:33:09.661376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194df780 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194cf700 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194bf680 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194af600 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001949f580 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001948f500 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001947f480 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001946f400 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001945f380 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001944f300 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001943f280 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001942f200 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001941f180 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.661960] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001940f100 len:0x10000 key:0x188c00 00:16:43.380 [2024-04-18 17:33:09.661978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.662001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199f0000 len:0x10000 key:0x188e00 00:16:43.380 [2024-04-18 17:33:09.662019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.380 [2024-04-18 17:33:09.662041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199dff80 len:0x10000 key:0x188e00 00:16:43.381 [2024-04-18 17:33:09.662063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199cff00 len:0x10000 key:0x188e00 00:16:43.381 [2024-04-18 17:33:09.662104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199bfe80 len:0x10000 key:0x188e00 00:16:43.381 [2024-04-18 17:33:09.662146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed9f000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed7e000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed5d000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed3c000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed1b000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecfa000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecd9000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecb8000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec97000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec76000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec55000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec34000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec13000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000ebf2000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebd1000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebb0000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c378000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c357000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c336000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.662975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c315000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.662997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.663021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2f4000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.663039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.663062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2d3000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.663081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.663104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2b2000 len:0x10000 
key:0x187700 00:16:43.381 [2024-04-18 17:33:09.663122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.663146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c291000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.663164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.663188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c270000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.663206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.663228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c66f000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.663254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.663278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c64e000 len:0x10000 key:0x187700 00:16:43.381 [2024-04-18 17:33:09.663296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.381 [2024-04-18 17:33:09.663318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c62d000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.663337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.663360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c60c000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.663377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.663414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5eb000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.663440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.663462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5ca000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.663480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.663507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5a9000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 
17:33:09.663526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256a80 sqhd:a600 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.665936] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256840 was disconnected and freed. reset controller. 00:16:43.382 [2024-04-18 17:33:09.665963] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.382 [2024-04-18 17:33:09.665987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1bf000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f19e000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f17d000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f15c000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f13b000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f11a000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0f9000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0d8000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0b7000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f096000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f075000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f054000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f033000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f012000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000eff1000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000efd0000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c588000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 
[2024-04-18 17:33:09.666760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c567000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c546000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c525000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c504000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4e3000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.666976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4c2000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.666994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.667016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4a1000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.667034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.667057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c480000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.667075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.667097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c87f000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.667117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.667140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c85e000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.667159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.667182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c83d000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.667201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.667224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c81c000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.667242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.667265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7fb000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.667284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.382 [2024-04-18 17:33:09.667307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7da000 len:0x10000 key:0x187700 00:16:43.382 [2024-04-18 17:33:09.667326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7b9000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca8f000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca6e000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca4d000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca2c000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca0b000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ea000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9c9000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9a8000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c987000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c966000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c945000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c924000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20000c903000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.667963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8e2000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.667982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8c1000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8a0000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc9f000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc7e000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc5d000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc3c000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc1b000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbfa000 
len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbd9000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbb8000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb97000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb76000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb55000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb34000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb13000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000caf2000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cad1000 len:0x10000 key:0x187700 00:16:43.383 
[2024-04-18 17:33:09.668722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.668744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cab0000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.668763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256840 sqhd:2010 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.671224] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256600 was disconnected and freed. reset controller. 00:16:43.383 [2024-04-18 17:33:09.671258] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.383 [2024-04-18 17:33:09.671284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5df000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.671304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.383 [2024-04-18 17:33:09.671340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5be000 len:0x10000 key:0x187700 00:16:43.383 [2024-04-18 17:33:09.671362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f59d000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f57c000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f55b000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f53a000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f519000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4f8000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4d7000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f4b6000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f495000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f474000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f453000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f432000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f411000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.671950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f3f0000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.671969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 
00:16:43.384 [2024-04-18 17:33:09.671992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8e000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce6d000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce4c000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce2b000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce0a000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cde9000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cdc8000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cda7000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672365] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd86000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd65000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd44000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd23000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cd02000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cce1000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ccc0000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d0bf000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d09e000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672777] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d07d000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d05c000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d03b000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d01a000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.384 [2024-04-18 17:33:09.672944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cff9000 len:0x10000 key:0x187700 00:16:43.384 [2024-04-18 17:33:09.672961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.672984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cfd8000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cfb7000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf96000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf75000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf54000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf33000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cf12000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cef1000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ced0000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2cf000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2ae000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d28d000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d26c000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20000d24b000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d22a000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d209000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d1e8000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d1c7000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d1a6000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d185000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d164000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d143000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d122000 
len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.673955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d101000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.673972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.683280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d0e0000 len:0x10000 key:0x187700 00:16:43.385 [2024-04-18 17:33:09.683315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256600 sqhd:00b0 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.685749] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192563c0 was disconnected and freed. reset controller. 00:16:43.385 [2024-04-18 17:33:09.685783] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.385 [2024-04-18 17:33:09.685808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a44f900 len:0x10000 key:0x187c00 00:16:43.385 [2024-04-18 17:33:09.685828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.685867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a43f880 len:0x10000 key:0x187c00 00:16:43.385 [2024-04-18 17:33:09.685890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.685914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a42f800 len:0x10000 key:0x187c00 00:16:43.385 [2024-04-18 17:33:09.685933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.685956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a41f780 len:0x10000 key:0x187c00 00:16:43.385 [2024-04-18 17:33:09.685979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.686002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a40f700 len:0x10000 key:0x187c00 00:16:43.385 [2024-04-18 17:33:09.686020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.385 [2024-04-18 17:33:09.686042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7f0000 len:0x10000 key:0x188800 00:16:43.386 [2024-04-18 17:33:09.686060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7dff80 len:0x10000 key:0x188800 00:16:43.386 [2024-04-18 17:33:09.686100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7cff00 len:0x10000 key:0x188800 00:16:43.386 [2024-04-18 17:33:09.686140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7bfe80 len:0x10000 key:0x188800 00:16:43.386 [2024-04-18 17:33:09.686179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e037000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e058000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c777000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c798000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c07000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c28000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f939000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f95a000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f97b000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e016000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dff5000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfd4000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfb3000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df92000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df71000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df50000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4be000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.686980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d49d000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.686998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.687022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d47c000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.687063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d45b000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.687106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d43a000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.687147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d419000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.687194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3f8000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 
00:16:43.386 [2024-04-18 17:33:09.687236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3d7000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.687278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d3b6000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.687319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d395000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.687361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d374000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.687412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d353000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.687463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d332000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.687505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d311000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.687545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2f0000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.386 [2024-04-18 17:33:09.687586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ef000 len:0x10000 key:0x187700 00:16:43.386 [2024-04-18 17:33:09.687605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.687628] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ce000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.687646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.687680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ad000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.687698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.687721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d68c000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.687739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.687763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d66b000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.687782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.687805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d64a000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.687824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.687847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d629000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.687867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.687893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d608000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.687918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.687942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d5e7000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.687961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.687984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d5c6000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.688024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d5a5000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.688066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d584000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.688106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d563000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.688146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d542000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.688185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d521000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.688227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d500000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.688267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8ff000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.688308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8de000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.688348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8bd000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.688402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d89c000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.688465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d87b000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.688508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d85a000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.688548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d839000 len:0x10000 key:0x187700 00:16:43.387 [2024-04-18 17:33:09.688567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:192563c0 sqhd:e010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.691021] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:16:43.387 [2024-04-18 17:33:09.691047] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:43.387 [2024-04-18 17:33:09.691070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aadfd80 len:0x10000 key:0x188100 00:16:43.387 [2024-04-18 17:33:09.691088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.691114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aacfd00 len:0x10000 key:0x188100 00:16:43.387 [2024-04-18 17:33:09.691134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.691156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aabfc80 len:0x10000 key:0x188100 00:16:43.387 [2024-04-18 17:33:09.691175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.691197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x188100 00:16:43.387 [2024-04-18 17:33:09.691215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.691236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa9fb80 len:0x10000 key:0x188100 00:16:43.387 [2024-04-18 17:33:09.691255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.691276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa8fb00 len:0x10000 key:0x188100 00:16:43.387 [2024-04-18 17:33:09.691294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.691316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa7fa80 len:0x10000 key:0x188100 00:16:43.387 [2024-04-18 17:33:09.691339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.691363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa6fa00 len:0x10000 key:0x188100 00:16:43.387 [2024-04-18 17:33:09.691390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.691425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x188100 00:16:43.387 [2024-04-18 17:33:09.691444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 
17:33:09.691466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa4f900 len:0x10000 key:0x188100 00:16:43.387 [2024-04-18 17:33:09.691483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.691505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa3f880 len:0x10000 key:0x188100 00:16:43.387 [2024-04-18 17:33:09.691524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.691545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x188100 00:16:43.387 [2024-04-18 17:33:09.691563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.691585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x188100 00:16:43.387 [2024-04-18 17:33:09.691603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.387 [2024-04-18 17:33:09.691625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x188100 00:16:43.388 [2024-04-18 17:33:09.691643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.691665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x187f00 00:16:43.388 [2024-04-18 17:33:09.691683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.691705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a82f200 len:0x10000 key:0x187f00 00:16:43.388 [2024-04-18 17:33:09.691724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.691746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f1f000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.691764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.691787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011efe000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.691808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.691833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011edd000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.691852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.691874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ebc000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.691892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.691916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e9b000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.691936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.691959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e7a000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.691978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e59000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e38000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e17000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011df6000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011dd5000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:20 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011db4000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d93000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d72000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d51000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d30000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db0f000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daee000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dacd000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daac000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29184 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20000da8b000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da6a000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da49000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da28000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da07000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d9e6000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d9c5000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d9a4000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d983000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.692963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.692986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d962000 
len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.693005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.693027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d941000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.693045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.693068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d920000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.693086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.693113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd1f000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.693132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.388 [2024-04-18 17:33:09.693154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcfe000 len:0x10000 key:0x187700 00:16:43.388 [2024-04-18 17:33:09.693173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcdd000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.693215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcbc000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.693261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc9b000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.693304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc7a000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.693345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc59000 len:0x10000 key:0x187700 00:16:43.389 
[2024-04-18 17:33:09.693396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc38000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.693450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc17000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.693492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbf6000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.693533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbd5000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.693574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbb4000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.693615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db93000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.693657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db72000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.693708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db51000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.693753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.693777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db30000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.693796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:19256180 sqhd:3010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.696402] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806c00 was disconnected and freed. reset controller. 00:16:43.389 [2024-04-18 17:33:09.696446] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.389 [2024-04-18 17:33:09.696471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001023f000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.696491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.696521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001021e000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.696541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.696565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101fd000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.696583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.696606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101dc000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.696624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.696649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000101bb000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.696668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.696690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001019a000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.696708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.696731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010179000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.696750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.696772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010158000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.696790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 
dnr:0 00:16:43.389 [2024-04-18 17:33:09.696814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010137000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.696832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.696862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010116000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.696882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.696905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100f5000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.696923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.696945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100d4000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.696964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.696987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000100b3000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.697005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.697028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010092000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.697046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.697069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010071000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.697087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.697110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010050000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.697128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.697151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010680000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.697169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.697192] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106a1000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.697210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.697232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106c2000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.697250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.697274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106e3000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.697292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.697316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010704000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.697339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.389 [2024-04-18 17:33:09.697363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010725000 len:0x10000 key:0x187700 00:16:43.389 [2024-04-18 17:33:09.697391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010746000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010767000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010788000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000107a9000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000107ca000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000107eb000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001080c000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001082d000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001084e000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001086f000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c8f000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c6e000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c4d000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.697980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c2c000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.697998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c0b000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010bea000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010bc9000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ba8000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b87000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b66000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b45000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010b24000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30208 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x200010b03000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ae2000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ac1000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010aa0000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e9f000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e7e000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e5d000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e3c000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010e1b000 len:0x10000 key:0x187700 00:16:43.390 [2024-04-18 17:33:09.698734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.390 [2024-04-18 17:33:09.698758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010dfa000 
len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.698776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.698803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010dd9000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.698823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.698846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010db8000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.698863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.698886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d97000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.698904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.698927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d76000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.698945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.698968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d55000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.698986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.699008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d34000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.699027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.699048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010d13000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.699066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.699089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010cf2000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.699107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.699129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010cd1000 len:0x10000 key:0x187700 00:16:43.391 
[2024-04-18 17:33:09.699147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.699171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010cb0000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.699188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b806c00 sqhd:8010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.701694] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8069c0 was disconnected and freed. reset controller. 00:16:43.391 [2024-04-18 17:33:09.701729] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.391 [2024-04-18 17:33:09.701754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010470000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.701779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.701820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010491000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.701844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.701869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104b2000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.701887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.701910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104d3000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.701929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.701952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104f4000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.701971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.701994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010515000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010599000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105ba000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105db000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105fc000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001061d000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001063e000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001065f000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 
00:16:43.391 [2024-04-18 17:33:09.702479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110af000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001108e000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001106d000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001104c000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001102b000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001100a000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010fe9000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010fc8000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.391 [2024-04-18 17:33:09.702828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010fa7000 len:0x10000 key:0x187700 00:16:43.391 [2024-04-18 17:33:09.702847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.702869] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f86000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.702889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.702912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f65000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.702930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.702953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f44000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.702972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.702995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f23000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f02000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ee1000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ec0000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000112bf000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001129e000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001127d000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001125c000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001123b000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001121a000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000111f9000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000111d8000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000111b7000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011196000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011175000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:38 nsid:1 lba:21888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011154000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011133000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011112000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110f1000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110d0000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000114cf000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000114ae000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001148d000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.703973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.703997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001146c000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.704015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.704039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23040 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20001144b000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.704057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.704080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001142a000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.704099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.704122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011409000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.704140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.704163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000113e8000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.704182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.704205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000113c7000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.704224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.704252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000113a6000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.704272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.704296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011385000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.704314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.704337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011364000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.704356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.704378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011343000 len:0x10000 key:0x187700 00:16:43.392 [2024-04-18 17:33:09.704443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.392 [2024-04-18 17:33:09.704468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011322000 
len:0x10000 key:0x187700 00:16:43.393 [2024-04-18 17:33:09.704487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.393 [2024-04-18 17:33:09.704510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011301000 len:0x10000 key:0x187700 00:16:43.393 [2024-04-18 17:33:09.704528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.393 [2024-04-18 17:33:09.704550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000112e0000 len:0x10000 key:0x187700 00:16:43.393 [2024-04-18 17:33:09.704569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1b8069c0 sqhd:1010 p:0 m:0 dnr:0 00:16:43.393 [2024-04-18 17:33:09.724694] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806780 was disconnected and freed. reset controller. 00:16:43.393 [2024-04-18 17:33:09.724728] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.393 [2024-04-18 17:33:09.724815] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.393 [2024-04-18 17:33:09.724842] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.393 [2024-04-18 17:33:09.724862] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.393 [2024-04-18 17:33:09.724881] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.393 [2024-04-18 17:33:09.724901] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.393 [2024-04-18 17:33:09.724920] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.393 [2024-04-18 17:33:09.724940] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.393 [2024-04-18 17:33:09.724959] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.393 [2024-04-18 17:33:09.724978] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:43.393 [2024-04-18 17:33:09.725003] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
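A note for anyone triaging this capture: the block of *NOTICE* completions above is the initiator flushing its queues during the forced controller reset; each in-flight READ on a deleted submission queue is completed with the generic ABORTED - SQ DELETION (00/08) status, which is why the same completion line repeats for every outstanding command. A rough way to tally them from a saved copy of this console output (console.log is just a placeholder name for such a copy):
# count aborted completions and list the qpairs that were disconnected and freed
# (console.log is a placeholder for a saved copy of this console output)
grep -o 'ABORTED - SQ DELETION (00/08)' console.log | wc -l
grep -o 'qpair 0x[0-9a-f]* was disconnected and freed' console.log | sort -u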
00:16:43.393 task offset: 32768 on job bdev=Nvme1n1 fails
00:16:43.393
00:16:43.393 Latency(us)
00:16:43.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:43.393 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.393 Job: Nvme1n1 ended in about 1.82 seconds with error
00:16:43.393 Verification LBA range: start 0x0 length 0x400
00:16:43.393 Nvme1n1 : 1.82 114.38 7.15 35.19 0.00 425656.32 27767.85 1012846.74
00:16:43.393 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.393 Job: Nvme2n1 ended in about 1.82 seconds with error
00:16:43.393 Verification LBA range: start 0x0 length 0x400
00:16:43.393 Nvme2n1 : 1.82 105.31 6.58 35.10 0.00 448289.00 39030.33 1074984.58
00:16:43.393 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.393 Job: Nvme3n1 ended in about 1.83 seconds with error
00:16:43.393 Verification LBA range: start 0x0 length 0x400
00:16:43.393 Nvme3n1 : 1.83 108.94 6.81 35.04 0.00 433428.26 5558.42 1068770.80
00:16:43.393 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.393 Job: Nvme4n1 ended in about 1.83 seconds with error
00:16:43.393 Verification LBA range: start 0x0 length 0x400
00:16:43.393 Nvme4n1 : 1.83 122.26 7.64 34.93 0.00 392871.38 8495.41 1062557.01
00:16:43.393 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.393 Job: Nvme5n1 ended in about 1.84 seconds with error
00:16:43.393 Verification LBA range: start 0x0 length 0x400
00:16:43.393 Nvme5n1 : 1.84 104.49 6.53 34.83 0.00 438186.67 38253.61 1062557.01
00:16:43.393 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.393 Job: Nvme6n1 ended in about 1.85 seconds with error
00:16:43.393 Verification LBA range: start 0x0 length 0x400
00:16:43.393 Nvme6n1 : 1.85 103.67 6.48 34.56 0.00 435544.37 55924.05 1074984.58
00:16:43.393 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.393 Job: Nvme7n1 ended in about 1.86 seconds with error
00:16:43.393 Verification LBA range: start 0x0 length 0x400
00:16:43.393 Nvme7n1 : 1.86 108.23 6.76 34.46 0.00 420236.98 15825.73 1074984.58
00:16:43.393 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.393 Job: Nvme8n1 ended in about 1.86 seconds with error
00:16:43.393 Verification LBA range: start 0x0 length 0x400
00:16:43.393 Nvme8n1 : 1.86 111.68 6.98 34.36 0.00 406514.17 18155.90 1074984.58
00:16:43.393 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.393 Job: Nvme9n1 ended in about 1.87 seconds with error
00:16:43.393 Verification LBA range: start 0x0 length 0x400
00:16:43.393 Nvme9n1 : 1.87 102.79 6.42 34.26 0.00 427916.71 57865.86 1074984.58
00:16:43.393 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.393 Job: Nvme10n1 ended in about 1.87 seconds with error
00:16:43.393 Verification LBA range: start 0x0 length 0x400
00:16:43.393 Nvme10n1 : 1.87 68.33 4.27 34.17 0.00 566671.99 65244.73 1081198.36
00:16:43.393 ===================================================================================================================
00:16:43.393 Total : 1050.09 65.63 346.91 0.00 435417.93 5558.42 1081198.36
00:16:43.393 [2024-04-18 17:33:09.756438] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:43.393 [2024-04-18 17:33:09.756528] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting
controller 00:16:43.393 [2024-04-18 17:33:09.756571] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:16:43.393 [2024-04-18 17:33:09.756592] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:16:43.393 [2024-04-18 17:33:09.756629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:16:43.393 [2024-04-18 17:33:09.756818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:16:43.393 [2024-04-18 17:33:09.756848] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:16:43.393 [2024-04-18 17:33:09.756868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:16:43.393 [2024-04-18 17:33:09.756886] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:16:43.393 [2024-04-18 17:33:09.756904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:16:43.393 [2024-04-18 17:33:09.756922] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:16:43.393 [2024-04-18 17:33:09.771298] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:43.393 [2024-04-18 17:33:09.771332] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:43.393 [2024-04-18 17:33:09.771348] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:16:43.393 [2024-04-18 17:33:09.771453] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:43.393 [2024-04-18 17:33:09.771480] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:43.393 [2024-04-18 17:33:09.771493] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5380 00:16:43.393 [2024-04-18 17:33:09.771554] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:43.393 [2024-04-18 17:33:09.771576] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:43.393 [2024-04-18 17:33:09.771589] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba540 00:16:43.393 [2024-04-18 17:33:09.771652] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:43.393 [2024-04-18 17:33:09.771674] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:43.393 [2024-04-18 17:33:09.771686] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c00 00:16:43.393 [2024-04-18 17:33:09.771851] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:43.393 [2024-04-18 17:33:09.771875] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:43.393 [2024-04-18 
17:33:09.771888] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f280 00:16:43.393 [2024-04-18 17:33:09.771957] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:43.393 [2024-04-18 17:33:09.771981] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:43.393 [2024-04-18 17:33:09.771994] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019298cc0 00:16:43.393 [2024-04-18 17:33:09.772060] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:43.393 [2024-04-18 17:33:09.772083] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:43.393 [2024-04-18 17:33:09.772096] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c140 00:16:43.393 [2024-04-18 17:33:09.772175] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:43.393 [2024-04-18 17:33:09.772203] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:43.393 [2024-04-18 17:33:09.772217] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192b54c0 00:16:43.393 [2024-04-18 17:33:09.772285] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:43.393 [2024-04-18 17:33:09.772308] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:43.393 [2024-04-18 17:33:09.772320] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd440 00:16:43.393 [2024-04-18 17:33:09.772395] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:16:43.393 [2024-04-18 17:33:09.772419] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:16:43.393 [2024-04-18 17:33:09.772432] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c60c0 00:16:43.962 17:33:10 -- target/shutdown.sh@142 -- # kill -9 2031835 00:16:43.962 17:33:10 -- target/shutdown.sh@144 -- # stoptarget 00:16:43.962 17:33:10 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:43.962 17:33:10 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:43.962 17:33:10 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:43.962 17:33:10 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:43.962 17:33:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:43.962 17:33:10 -- nvmf/common.sh@117 -- # sync 00:16:43.962 17:33:10 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:43.962 17:33:10 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:43.962 17:33:10 -- nvmf/common.sh@120 -- # set +e 00:16:43.962 17:33:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.962 17:33:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:43.962 rmmod nvme_rdma 00:16:43.962 rmmod nvme_fabrics 
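The RDMA_CM_EVENT_REJECTED / "RDMA connect error -74" groups above are the host's immediate reconnect attempts being refused while the target is being torn down by this shutdown test; ten distinct rqpair addresses show up, one per controller being reset, which is consistent with the forced failover rather than with a fabric fault. To list the queue pairs whose reconnect was rejected in a saved copy of this output (again, console.log is only a placeholder file name):
# unique qpairs whose reconnect was rejected during the target shutdown
grep -o 'Failed to connect rqpair=0x[0-9a-f]*' console.log | sort -u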
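Separately, a quick sanity check on the bdevperf summary further up: every job ran 64 KiB I/Os ("IO size: 65536" in each job header), so the MiB/s column should equal the IOPS column divided by 16, since 65536 bytes is 1/16 MiB. The reported rows are consistent with that, e.g. for Nvme1n1 and for the Total row:
# MiB/s = IOPS * 65536 / 2^20 = IOPS / 16
awk 'BEGIN { printf "Nvme1n1: %.2f MiB/s  Total: %.2f MiB/s\n", 114.38 / 16, 1050.09 / 16 }'
# expected output: Nvme1n1: 7.15 MiB/s  Total: 65.63 MiB/s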
00:16:43.962 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 2031835 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:16:43.962 17:33:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.962 17:33:10 -- nvmf/common.sh@124 -- # set -e 00:16:43.962 17:33:10 -- nvmf/common.sh@125 -- # return 0 00:16:43.962 17:33:10 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:16:43.962 17:33:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:43.962 17:33:10 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:43.962 00:16:43.962 real 0m4.744s 00:16:43.962 user 0m15.947s 00:16:43.962 sys 0m1.210s 00:16:43.962 17:33:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:43.962 17:33:10 -- common/autotest_common.sh@10 -- # set +x 00:16:43.962 ************************************ 00:16:43.962 END TEST nvmf_shutdown_tc3 00:16:43.962 ************************************ 00:16:43.962 17:33:10 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:16:43.962 00:16:43.963 real 0m19.846s 00:16:43.963 user 1m9.232s 00:16:43.963 sys 0m4.981s 00:16:43.963 17:33:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:43.963 17:33:10 -- common/autotest_common.sh@10 -- # set +x 00:16:43.963 ************************************ 00:16:43.963 END TEST nvmf_shutdown 00:16:43.963 ************************************ 00:16:43.963 17:33:10 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:16:43.963 17:33:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:43.963 17:33:10 -- common/autotest_common.sh@10 -- # set +x 00:16:43.963 17:33:10 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:16:43.963 17:33:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:43.963 17:33:10 -- common/autotest_common.sh@10 -- # set +x 00:16:43.963 17:33:10 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:16:43.963 17:33:10 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:16:43.963 17:33:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:43.963 17:33:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.963 17:33:10 -- common/autotest_common.sh@10 -- # set +x 00:16:43.963 ************************************ 00:16:43.963 START TEST nvmf_multicontroller 00:16:43.963 ************************************ 00:16:43.963 17:33:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:16:43.963 * Looking for test storage... 
00:16:43.963 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:43.963 17:33:10 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:43.963 17:33:10 -- nvmf/common.sh@7 -- # uname -s 00:16:43.963 17:33:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.963 17:33:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.963 17:33:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.963 17:33:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.963 17:33:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.963 17:33:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.963 17:33:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.963 17:33:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.963 17:33:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.963 17:33:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.963 17:33:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.963 17:33:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.963 17:33:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.963 17:33:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.963 17:33:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.963 17:33:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.963 17:33:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:43.963 17:33:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.963 17:33:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.963 17:33:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.963 17:33:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.963 17:33:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.963 17:33:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.963 17:33:10 -- paths/export.sh@5 -- # export PATH 00:16:43.963 17:33:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.963 17:33:10 -- nvmf/common.sh@47 -- # : 0 00:16:43.963 17:33:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.963 17:33:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.963 17:33:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.963 17:33:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.963 17:33:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.963 17:33:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.963 17:33:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.963 17:33:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.963 17:33:10 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:43.963 17:33:10 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:43.963 17:33:10 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:43.963 17:33:10 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:43.963 17:33:10 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:43.963 17:33:10 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:16:43.963 17:33:10 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:16:43.963 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:16:43.963 17:33:10 -- host/multicontroller.sh@20 -- # exit 0 00:16:43.963 00:16:43.963 real 0m0.073s 00:16:43.963 user 0m0.039s 00:16:43.963 sys 0m0.040s 00:16:43.963 17:33:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:43.963 17:33:10 -- common/autotest_common.sh@10 -- # set +x 00:16:43.963 ************************************ 00:16:43.963 END TEST nvmf_multicontroller 00:16:43.963 ************************************ 00:16:44.222 17:33:10 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:16:44.222 17:33:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:44.222 17:33:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:44.222 17:33:10 -- common/autotest_common.sh@10 -- # set +x 00:16:44.222 ************************************ 00:16:44.222 START TEST nvmf_aer 00:16:44.222 ************************************ 00:16:44.222 17:33:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:16:44.222 * Looking for test storage... 00:16:44.222 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:44.222 17:33:10 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.222 17:33:10 -- nvmf/common.sh@7 -- # uname -s 00:16:44.222 17:33:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.222 17:33:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.222 17:33:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.222 17:33:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.222 17:33:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.222 17:33:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.222 17:33:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.222 17:33:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.222 17:33:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.222 17:33:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.222 17:33:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.222 17:33:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:44.222 17:33:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.222 17:33:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.222 17:33:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.222 17:33:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.222 17:33:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:44.222 17:33:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.222 17:33:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.222 17:33:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.222 17:33:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.222 17:33:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.223 17:33:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.223 17:33:10 -- paths/export.sh@5 -- # export PATH 00:16:44.223 17:33:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.223 17:33:10 -- nvmf/common.sh@47 -- # : 0 00:16:44.223 17:33:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.223 17:33:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.223 17:33:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.223 17:33:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.223 17:33:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.223 17:33:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.223 17:33:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.223 17:33:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.223 17:33:10 -- host/aer.sh@11 -- # nvmftestinit 00:16:44.223 17:33:10 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:16:44.223 17:33:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.223 17:33:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:44.223 17:33:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:44.223 17:33:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:44.223 17:33:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.223 17:33:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.223 17:33:10 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.223 17:33:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:44.223 17:33:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:44.223 17:33:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:44.223 17:33:10 -- common/autotest_common.sh@10 -- # set +x 00:16:46.131 17:33:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:46.131 17:33:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:46.131 17:33:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:46.131 17:33:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:46.131 17:33:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:46.131 17:33:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:46.131 17:33:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:46.131 17:33:12 -- nvmf/common.sh@295 -- # net_devs=() 00:16:46.131 17:33:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:46.131 17:33:12 -- nvmf/common.sh@296 -- # e810=() 00:16:46.131 17:33:12 -- nvmf/common.sh@296 -- # local -ga e810 00:16:46.131 17:33:12 -- nvmf/common.sh@297 -- # x722=() 00:16:46.131 17:33:12 -- nvmf/common.sh@297 -- # local -ga x722 00:16:46.131 17:33:12 -- nvmf/common.sh@298 -- # mlx=() 00:16:46.131 17:33:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:46.131 17:33:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.131 17:33:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.131 17:33:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.131 17:33:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.131 17:33:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.131 17:33:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.131 17:33:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.131 17:33:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.131 17:33:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.131 17:33:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.131 17:33:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.131 17:33:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:46.131 17:33:12 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:46.131 17:33:12 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:46.131 17:33:12 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:46.131 17:33:12 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:46.131 17:33:12 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:46.131 17:33:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:46.131 17:33:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.131 17:33:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:16:46.131 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:16:46.131 17:33:12 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:46.131 17:33:12 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:46.131 17:33:12 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:46.131 17:33:12 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:46.131 17:33:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.131 17:33:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:16:46.131 Found 0000:09:00.1 (0x15b3 
- 0x1017) 00:16:46.131 17:33:12 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:46.131 17:33:12 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:46.131 17:33:12 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:46.131 17:33:12 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:46.131 17:33:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:46.131 17:33:12 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:46.131 17:33:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.131 17:33:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.131 17:33:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:46.131 17:33:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.131 17:33:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:16:46.131 Found net devices under 0000:09:00.0: mlx_0_0 00:16:46.131 17:33:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.131 17:33:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.131 17:33:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.131 17:33:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:46.131 17:33:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.132 17:33:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:16:46.132 Found net devices under 0000:09:00.1: mlx_0_1 00:16:46.132 17:33:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.132 17:33:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:46.132 17:33:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:46.132 17:33:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:46.132 17:33:12 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:16:46.132 17:33:12 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:16:46.132 17:33:12 -- nvmf/common.sh@409 -- # rdma_device_init 00:16:46.132 17:33:12 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:16:46.132 17:33:12 -- nvmf/common.sh@58 -- # uname 00:16:46.132 17:33:12 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:46.132 17:33:12 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:46.132 17:33:12 -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:46.132 17:33:12 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:46.132 17:33:12 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:46.132 17:33:12 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:46.132 17:33:12 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:46.132 17:33:12 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:46.132 17:33:12 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:16:46.132 17:33:12 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:46.132 17:33:12 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:46.132 17:33:12 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:46.132 17:33:12 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:46.132 17:33:12 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:46.132 17:33:12 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:46.132 17:33:12 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:46.132 17:33:12 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:46.132 17:33:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.132 17:33:12 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:46.132 17:33:12 -- nvmf/common.sh@104 -- # 
echo mlx_0_0 00:16:46.132 17:33:12 -- nvmf/common.sh@105 -- # continue 2 00:16:46.132 17:33:12 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:46.132 17:33:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.132 17:33:12 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:46.132 17:33:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.132 17:33:12 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:46.132 17:33:12 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:46.132 17:33:12 -- nvmf/common.sh@105 -- # continue 2 00:16:46.132 17:33:12 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:46.132 17:33:12 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:46.132 17:33:12 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:46.132 17:33:12 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:46.132 17:33:12 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:46.132 17:33:12 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:46.132 17:33:12 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:46.132 17:33:12 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:46.132 17:33:12 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:46.132 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:46.132 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:16:46.132 altname enp9s0f0np0 00:16:46.132 inet 192.168.100.8/24 scope global mlx_0_0 00:16:46.132 valid_lft forever preferred_lft forever 00:16:46.132 17:33:12 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:46.132 17:33:12 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:46.132 17:33:12 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:46.132 17:33:12 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:46.132 17:33:12 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:46.132 17:33:12 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:46.132 17:33:12 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:46.132 17:33:12 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:46.132 17:33:12 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:46.132 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:46.132 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:16:46.132 altname enp9s0f1np1 00:16:46.132 inet 192.168.100.9/24 scope global mlx_0_1 00:16:46.132 valid_lft forever preferred_lft forever 00:16:46.132 17:33:12 -- nvmf/common.sh@411 -- # return 0 00:16:46.132 17:33:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:46.132 17:33:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:46.132 17:33:12 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:16:46.132 17:33:12 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:16:46.132 17:33:12 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:46.132 17:33:12 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:46.132 17:33:12 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:46.132 17:33:12 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:46.132 17:33:12 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:46.132 17:33:12 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:46.132 17:33:12 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:46.132 17:33:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.132 17:33:12 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:46.132 17:33:12 -- 
nvmf/common.sh@104 -- # echo mlx_0_0 00:16:46.132 17:33:12 -- nvmf/common.sh@105 -- # continue 2 00:16:46.132 17:33:12 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:46.132 17:33:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.132 17:33:12 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:46.132 17:33:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:46.132 17:33:12 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:46.132 17:33:12 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:46.132 17:33:12 -- nvmf/common.sh@105 -- # continue 2 00:16:46.132 17:33:12 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:46.132 17:33:12 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:46.132 17:33:12 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:46.132 17:33:12 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:46.132 17:33:12 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:46.132 17:33:12 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:46.132 17:33:12 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:46.132 17:33:12 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:46.132 17:33:12 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:46.132 17:33:12 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:46.132 17:33:12 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:46.132 17:33:12 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:46.132 17:33:12 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:16:46.132 192.168.100.9' 00:16:46.132 17:33:12 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:46.132 192.168.100.9' 00:16:46.132 17:33:12 -- nvmf/common.sh@446 -- # head -n 1 00:16:46.132 17:33:12 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:46.132 17:33:12 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:16:46.132 192.168.100.9' 00:16:46.132 17:33:12 -- nvmf/common.sh@447 -- # tail -n +2 00:16:46.132 17:33:12 -- nvmf/common.sh@447 -- # head -n 1 00:16:46.132 17:33:12 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:46.132 17:33:12 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:16:46.132 17:33:12 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:46.132 17:33:12 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:16:46.133 17:33:12 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:16:46.133 17:33:12 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:16:46.133 17:33:12 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:46.133 17:33:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:46.133 17:33:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:46.133 17:33:12 -- common/autotest_common.sh@10 -- # set +x 00:16:46.133 17:33:12 -- nvmf/common.sh@470 -- # nvmfpid=2034531 00:16:46.133 17:33:12 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:46.133 17:33:12 -- nvmf/common.sh@471 -- # waitforlisten 2034531 00:16:46.133 17:33:12 -- common/autotest_common.sh@817 -- # '[' -z 2034531 ']' 00:16:46.133 17:33:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.133 17:33:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:46.133 17:33:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:46.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.133 17:33:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:46.392 17:33:12 -- common/autotest_common.sh@10 -- # set +x 00:16:46.392 [2024-04-18 17:33:12.712569] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:16:46.392 [2024-04-18 17:33:12.712645] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.392 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.392 [2024-04-18 17:33:12.775559] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.392 [2024-04-18 17:33:12.889770] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.392 [2024-04-18 17:33:12.889820] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.392 [2024-04-18 17:33:12.889845] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.392 [2024-04-18 17:33:12.889857] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.392 [2024-04-18 17:33:12.889868] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.392 [2024-04-18 17:33:12.889971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.392 [2024-04-18 17:33:12.890023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.392 [2024-04-18 17:33:12.890142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.392 [2024-04-18 17:33:12.890144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.328 17:33:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:47.328 17:33:13 -- common/autotest_common.sh@850 -- # return 0 00:16:47.328 17:33:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:47.328 17:33:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:47.328 17:33:13 -- common/autotest_common.sh@10 -- # set +x 00:16:47.328 17:33:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.328 17:33:13 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:47.328 17:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.328 17:33:13 -- common/autotest_common.sh@10 -- # set +x 00:16:47.328 [2024-04-18 17:33:13.679066] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1682b60/0x1687050) succeed. 00:16:47.328 [2024-04-18 17:33:13.689871] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1684150/0x16c86e0) succeed. 
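A minimal sketch of the target bring-up just traced for the aer test, assuming the SPDK test helpers behave as the trace shows (nvmfappstart launches nvmf_tgt and waitforlisten blocks until its RPC socket answers; rpc_cmd then issues JSON-RPC calls against that socket):
    $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &    # nvmfappstart -m 0xF
    nvmfpid=$!
    waitforlisten "$nvmfpid"                               # waits on /var/tmp/spdk.sock
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192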
00:16:47.328 17:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.328 17:33:13 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:47.328 17:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.328 17:33:13 -- common/autotest_common.sh@10 -- # set +x 00:16:47.328 Malloc0 00:16:47.328 17:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.328 17:33:13 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:47.328 17:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.328 17:33:13 -- common/autotest_common.sh@10 -- # set +x 00:16:47.328 17:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.328 17:33:13 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:47.328 17:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.328 17:33:13 -- common/autotest_common.sh@10 -- # set +x 00:16:47.587 17:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.587 17:33:13 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:47.587 17:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.587 17:33:13 -- common/autotest_common.sh@10 -- # set +x 00:16:47.587 [2024-04-18 17:33:13.876149] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:47.587 17:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.587 17:33:13 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:47.587 17:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.587 17:33:13 -- common/autotest_common.sh@10 -- # set +x 00:16:47.587 [2024-04-18 17:33:13.883887] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:47.587 [ 00:16:47.587 { 00:16:47.587 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:47.587 "subtype": "Discovery", 00:16:47.587 "listen_addresses": [], 00:16:47.587 "allow_any_host": true, 00:16:47.587 "hosts": [] 00:16:47.587 }, 00:16:47.587 { 00:16:47.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.587 "subtype": "NVMe", 00:16:47.587 "listen_addresses": [ 00:16:47.587 { 00:16:47.587 "transport": "RDMA", 00:16:47.587 "trtype": "RDMA", 00:16:47.587 "adrfam": "IPv4", 00:16:47.587 "traddr": "192.168.100.8", 00:16:47.587 "trsvcid": "4420" 00:16:47.587 } 00:16:47.587 ], 00:16:47.587 "allow_any_host": true, 00:16:47.587 "hosts": [], 00:16:47.587 "serial_number": "SPDK00000000000001", 00:16:47.587 "model_number": "SPDK bdev Controller", 00:16:47.587 "max_namespaces": 2, 00:16:47.587 "min_cntlid": 1, 00:16:47.587 "max_cntlid": 65519, 00:16:47.587 "namespaces": [ 00:16:47.587 { 00:16:47.587 "nsid": 1, 00:16:47.587 "bdev_name": "Malloc0", 00:16:47.587 "name": "Malloc0", 00:16:47.587 "nguid": "991181E49D124EA6931B6969E57D478C", 00:16:47.587 "uuid": "991181e4-9d12-4ea6-931b-6969e57d478c" 00:16:47.587 } 00:16:47.587 ] 00:16:47.587 } 00:16:47.587 ] 00:16:47.587 17:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.587 17:33:13 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:47.587 17:33:13 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:47.587 17:33:13 -- host/aer.sh@33 -- # aerpid=2034691 00:16:47.587 17:33:13 -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:47.587 17:33:13 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:47.587 17:33:13 -- common/autotest_common.sh@1251 -- # local i=0 00:16:47.587 17:33:13 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:47.587 17:33:13 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:16:47.587 17:33:13 -- common/autotest_common.sh@1254 -- # i=1 00:16:47.587 17:33:13 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:16:47.587 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.587 17:33:13 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:47.587 17:33:13 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:16:47.587 17:33:13 -- common/autotest_common.sh@1254 -- # i=2 00:16:47.587 17:33:13 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:16:47.587 17:33:14 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:47.587 17:33:14 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:47.587 17:33:14 -- common/autotest_common.sh@1262 -- # return 0 00:16:47.587 17:33:14 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:47.587 17:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.587 17:33:14 -- common/autotest_common.sh@10 -- # set +x 00:16:47.848 Malloc1 00:16:47.848 17:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.848 17:33:14 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:47.848 17:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.848 17:33:14 -- common/autotest_common.sh@10 -- # set +x 00:16:47.848 17:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.848 17:33:14 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:47.848 17:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.848 17:33:14 -- common/autotest_common.sh@10 -- # set +x 00:16:47.848 [ 00:16:47.848 { 00:16:47.848 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:47.848 "subtype": "Discovery", 00:16:47.848 "listen_addresses": [], 00:16:47.848 "allow_any_host": true, 00:16:47.848 "hosts": [] 00:16:47.848 }, 00:16:47.848 { 00:16:47.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.848 "subtype": "NVMe", 00:16:47.848 "listen_addresses": [ 00:16:47.848 { 00:16:47.848 "transport": "RDMA", 00:16:47.848 "trtype": "RDMA", 00:16:47.848 "adrfam": "IPv4", 00:16:47.848 "traddr": "192.168.100.8", 00:16:47.848 "trsvcid": "4420" 00:16:47.848 } 00:16:47.848 ], 00:16:47.848 "allow_any_host": true, 00:16:47.848 "hosts": [], 00:16:47.848 "serial_number": "SPDK00000000000001", 00:16:47.848 "model_number": "SPDK bdev Controller", 00:16:47.848 "max_namespaces": 2, 00:16:47.848 "min_cntlid": 1, 00:16:47.848 "max_cntlid": 65519, 00:16:47.848 "namespaces": [ 00:16:47.848 { 00:16:47.848 "nsid": 1, 00:16:47.848 "bdev_name": "Malloc0", 00:16:47.848 "name": "Malloc0", 00:16:47.848 "nguid": "991181E49D124EA6931B6969E57D478C", 00:16:47.848 "uuid": "991181e4-9d12-4ea6-931b-6969e57d478c" 00:16:47.848 }, 00:16:47.848 { 00:16:47.848 "nsid": 2, 00:16:47.848 "bdev_name": "Malloc1", 00:16:47.848 "name": "Malloc1", 00:16:47.848 "nguid": "8F34559EB52B431F99FFEE618713688F", 00:16:47.848 "uuid": "8f34559e-b52b-431f-99ff-ee618713688f" 00:16:47.848 } 00:16:47.848 ] 00:16:47.848 } 
00:16:47.848 ] 00:16:47.848 17:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.848 17:33:14 -- host/aer.sh@43 -- # wait 2034691 00:16:47.848 Asynchronous Event Request test 00:16:47.848 Attaching to 192.168.100.8 00:16:47.848 Attached to 192.168.100.8 00:16:47.848 Registering asynchronous event callbacks... 00:16:47.848 Starting namespace attribute notice tests for all controllers... 00:16:47.848 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:47.848 aer_cb - Changed Namespace 00:16:47.848 Cleaning up... 00:16:47.848 17:33:14 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:47.848 17:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.848 17:33:14 -- common/autotest_common.sh@10 -- # set +x 00:16:47.848 17:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.848 17:33:14 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:47.848 17:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.848 17:33:14 -- common/autotest_common.sh@10 -- # set +x 00:16:47.848 17:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.848 17:33:14 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.848 17:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.848 17:33:14 -- common/autotest_common.sh@10 -- # set +x 00:16:47.848 17:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.848 17:33:14 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:47.848 17:33:14 -- host/aer.sh@51 -- # nvmftestfini 00:16:47.848 17:33:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:47.848 17:33:14 -- nvmf/common.sh@117 -- # sync 00:16:47.848 17:33:14 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:47.848 17:33:14 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:47.848 17:33:14 -- nvmf/common.sh@120 -- # set +e 00:16:47.848 17:33:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.848 17:33:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:47.848 rmmod nvme_rdma 00:16:47.848 rmmod nvme_fabrics 00:16:47.848 17:33:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.848 17:33:14 -- nvmf/common.sh@124 -- # set -e 00:16:47.848 17:33:14 -- nvmf/common.sh@125 -- # return 0 00:16:47.848 17:33:14 -- nvmf/common.sh@478 -- # '[' -n 2034531 ']' 00:16:47.848 17:33:14 -- nvmf/common.sh@479 -- # killprocess 2034531 00:16:47.848 17:33:14 -- common/autotest_common.sh@936 -- # '[' -z 2034531 ']' 00:16:47.848 17:33:14 -- common/autotest_common.sh@940 -- # kill -0 2034531 00:16:47.848 17:33:14 -- common/autotest_common.sh@941 -- # uname 00:16:47.848 17:33:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:47.848 17:33:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2034531 00:16:47.848 17:33:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:47.848 17:33:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:47.848 17:33:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2034531' 00:16:47.848 killing process with pid 2034531 00:16:47.848 17:33:14 -- common/autotest_common.sh@955 -- # kill 2034531 00:16:47.848 [2024-04-18 17:33:14.345434] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:47.848 17:33:14 -- common/autotest_common.sh@960 -- # wait 2034531 00:16:48.418 17:33:14 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:48.418 17:33:14 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:48.418 00:16:48.418 real 0m4.088s 00:16:48.418 user 0m7.728s 00:16:48.418 sys 0m1.876s 00:16:48.418 17:33:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:48.418 17:33:14 -- common/autotest_common.sh@10 -- # set +x 00:16:48.418 ************************************ 00:16:48.418 END TEST nvmf_aer 00:16:48.418 ************************************ 00:16:48.418 17:33:14 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:16:48.418 17:33:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:48.418 17:33:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:48.418 17:33:14 -- common/autotest_common.sh@10 -- # set +x 00:16:48.418 ************************************ 00:16:48.418 START TEST nvmf_async_init 00:16:48.418 ************************************ 00:16:48.418 17:33:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:16:48.418 * Looking for test storage... 00:16:48.418 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:48.418 17:33:14 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.418 17:33:14 -- nvmf/common.sh@7 -- # uname -s 00:16:48.418 17:33:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.418 17:33:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.418 17:33:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.418 17:33:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.418 17:33:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.418 17:33:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.418 17:33:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.418 17:33:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.418 17:33:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.418 17:33:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.418 17:33:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.418 17:33:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.418 17:33:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.418 17:33:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.418 17:33:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:48.418 17:33:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.418 17:33:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:48.418 17:33:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.418 17:33:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.418 17:33:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.418 17:33:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.418 17:33:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.418 17:33:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.418 17:33:14 -- paths/export.sh@5 -- # export PATH 00:16:48.418 17:33:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.418 17:33:14 -- nvmf/common.sh@47 -- # : 0 00:16:48.418 17:33:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:48.418 17:33:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:48.418 17:33:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.418 17:33:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.418 17:33:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.418 17:33:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:48.418 17:33:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:48.418 17:33:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:48.418 17:33:14 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:48.418 17:33:14 -- host/async_init.sh@14 -- # null_block_size=512 00:16:48.418 17:33:14 -- host/async_init.sh@15 -- # null_bdev=null0 00:16:48.418 17:33:14 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:48.418 17:33:14 -- host/async_init.sh@20 -- # uuidgen 00:16:48.418 17:33:14 -- host/async_init.sh@20 -- # tr -d - 00:16:48.418 17:33:14 -- host/async_init.sh@20 -- # nguid=e9771a0724fc40688e1da0040b72f35a 00:16:48.418 17:33:14 -- host/async_init.sh@22 -- # nvmftestinit 00:16:48.418 17:33:14 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 
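The host/async_init.sh parameters traced above reduce to the following sketch (the values are exactly the ones echoed in this run; the NGUID is simply a dash-stripped UUID):
    null_bdev_size=1024
    null_block_size=512
    null_bdev=null0
    nvme_bdev=nvme0
    nguid=$(uuidgen | tr -d -)                             # e9771a0724fc40688e1da0040b72f35a in this run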
00:16:48.418 17:33:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.418 17:33:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:48.418 17:33:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:48.418 17:33:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:48.418 17:33:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.418 17:33:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.418 17:33:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.418 17:33:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:48.418 17:33:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:48.418 17:33:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:48.418 17:33:14 -- common/autotest_common.sh@10 -- # set +x 00:16:50.343 17:33:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:50.343 17:33:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:50.343 17:33:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:50.343 17:33:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:50.343 17:33:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:50.343 17:33:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:50.343 17:33:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:50.343 17:33:16 -- nvmf/common.sh@295 -- # net_devs=() 00:16:50.343 17:33:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:50.343 17:33:16 -- nvmf/common.sh@296 -- # e810=() 00:16:50.343 17:33:16 -- nvmf/common.sh@296 -- # local -ga e810 00:16:50.343 17:33:16 -- nvmf/common.sh@297 -- # x722=() 00:16:50.343 17:33:16 -- nvmf/common.sh@297 -- # local -ga x722 00:16:50.343 17:33:16 -- nvmf/common.sh@298 -- # mlx=() 00:16:50.343 17:33:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:50.343 17:33:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.343 17:33:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.343 17:33:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.343 17:33:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.343 17:33:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.343 17:33:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.343 17:33:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.343 17:33:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.343 17:33:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.343 17:33:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.343 17:33:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.343 17:33:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:50.343 17:33:16 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:50.343 17:33:16 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:50.343 17:33:16 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:50.344 17:33:16 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:50.344 17:33:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:50.344 17:33:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.344 17:33:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:16:50.344 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:16:50.344 17:33:16 -- 
nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:50.344 17:33:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.344 17:33:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:16:50.344 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:16:50.344 17:33:16 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:50.344 17:33:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:50.344 17:33:16 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.344 17:33:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.344 17:33:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:50.344 17:33:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.344 17:33:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:16:50.344 Found net devices under 0000:09:00.0: mlx_0_0 00:16:50.344 17:33:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.344 17:33:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.344 17:33:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.344 17:33:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:50.344 17:33:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.344 17:33:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:16:50.344 Found net devices under 0000:09:00.1: mlx_0_1 00:16:50.344 17:33:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.344 17:33:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:50.344 17:33:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:50.344 17:33:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@409 -- # rdma_device_init 00:16:50.344 17:33:16 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:16:50.344 17:33:16 -- nvmf/common.sh@58 -- # uname 00:16:50.344 17:33:16 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:50.344 17:33:16 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:50.344 17:33:16 -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:50.344 17:33:16 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:50.344 17:33:16 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:50.344 17:33:16 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:50.344 17:33:16 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:50.344 17:33:16 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:50.344 17:33:16 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:16:50.344 17:33:16 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:50.344 17:33:16 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:50.344 17:33:16 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:50.344 17:33:16 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:50.344 17:33:16 -- nvmf/common.sh@94 -- # rxe_cfg 
rxe-net 00:16:50.344 17:33:16 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:50.344 17:33:16 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:50.344 17:33:16 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:50.344 17:33:16 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:50.344 17:33:16 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:50.344 17:33:16 -- nvmf/common.sh@105 -- # continue 2 00:16:50.344 17:33:16 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:50.344 17:33:16 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:50.344 17:33:16 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:50.344 17:33:16 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:50.344 17:33:16 -- nvmf/common.sh@105 -- # continue 2 00:16:50.344 17:33:16 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:50.344 17:33:16 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:50.344 17:33:16 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:50.344 17:33:16 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:50.344 17:33:16 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:50.344 17:33:16 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:50.344 17:33:16 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:50.344 17:33:16 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:50.344 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:50.344 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:16:50.344 altname enp9s0f0np0 00:16:50.344 inet 192.168.100.8/24 scope global mlx_0_0 00:16:50.344 valid_lft forever preferred_lft forever 00:16:50.344 17:33:16 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:50.344 17:33:16 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:50.344 17:33:16 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:50.344 17:33:16 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:50.344 17:33:16 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:50.344 17:33:16 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:50.344 17:33:16 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:50.344 17:33:16 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:50.344 17:33:16 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:50.602 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:50.602 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:16:50.602 altname enp9s0f1np1 00:16:50.602 inet 192.168.100.9/24 scope global mlx_0_1 00:16:50.602 valid_lft forever preferred_lft forever 00:16:50.602 17:33:16 -- nvmf/common.sh@411 -- # return 0 00:16:50.602 17:33:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:50.602 17:33:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:50.602 17:33:16 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:16:50.602 17:33:16 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:16:50.602 17:33:16 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:50.602 17:33:16 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:50.602 17:33:16 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:50.602 17:33:16 -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:50.602 17:33:16 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:50.602 17:33:16 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:50.602 17:33:16 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:50.602 17:33:16 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:50.602 17:33:16 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:50.602 17:33:16 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:50.602 17:33:16 -- nvmf/common.sh@105 -- # continue 2 00:16:50.602 17:33:16 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:50.602 17:33:16 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:50.602 17:33:16 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:50.602 17:33:16 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:50.602 17:33:16 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:50.602 17:33:16 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:50.602 17:33:16 -- nvmf/common.sh@105 -- # continue 2 00:16:50.602 17:33:16 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:50.602 17:33:16 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:50.602 17:33:16 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:50.602 17:33:16 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:50.602 17:33:16 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:50.602 17:33:16 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:50.602 17:33:16 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:50.602 17:33:16 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:50.602 17:33:16 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:50.602 17:33:16 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:50.602 17:33:16 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:50.602 17:33:16 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:50.602 17:33:16 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:16:50.602 192.168.100.9' 00:16:50.602 17:33:16 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:50.602 192.168.100.9' 00:16:50.602 17:33:16 -- nvmf/common.sh@446 -- # head -n 1 00:16:50.602 17:33:16 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:50.602 17:33:16 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:16:50.602 192.168.100.9' 00:16:50.602 17:33:16 -- nvmf/common.sh@447 -- # tail -n +2 00:16:50.602 17:33:16 -- nvmf/common.sh@447 -- # head -n 1 00:16:50.602 17:33:16 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:50.602 17:33:16 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:16:50.602 17:33:16 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:50.602 17:33:16 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:16:50.602 17:33:16 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:16:50.602 17:33:16 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:16:50.602 17:33:16 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:50.602 17:33:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:50.602 17:33:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:50.602 17:33:16 -- common/autotest_common.sh@10 -- # set +x 00:16:50.602 17:33:16 -- nvmf/common.sh@470 -- # nvmfpid=2036493 00:16:50.602 17:33:16 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:50.602 17:33:16 -- 
nvmf/common.sh@471 -- # waitforlisten 2036493 00:16:50.602 17:33:16 -- common/autotest_common.sh@817 -- # '[' -z 2036493 ']' 00:16:50.602 17:33:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.602 17:33:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:50.602 17:33:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.602 17:33:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:50.602 17:33:16 -- common/autotest_common.sh@10 -- # set +x 00:16:50.602 [2024-04-18 17:33:16.972133] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:16:50.602 [2024-04-18 17:33:16.972204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.602 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.603 [2024-04-18 17:33:17.028049] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.603 [2024-04-18 17:33:17.131890] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.603 [2024-04-18 17:33:17.131955] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.603 [2024-04-18 17:33:17.131987] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.603 [2024-04-18 17:33:17.131999] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.603 [2024-04-18 17:33:17.132009] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.603 [2024-04-18 17:33:17.132051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.862 17:33:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:50.862 17:33:17 -- common/autotest_common.sh@850 -- # return 0 00:16:50.862 17:33:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:50.862 17:33:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:50.862 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:50.862 17:33:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.862 17:33:17 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:16:50.862 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.862 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:50.862 [2024-04-18 17:33:17.293775] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16949b0/0x1698ea0) succeed. 00:16:50.862 [2024-04-18 17:33:17.305629] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1695eb0/0x16da530) succeed. 
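Editor's note: up to this point the async_init host test has started nvmf_tgt on core 0 and created its RDMA transport, which is what registers the two mlx5 IB devices noted just above. A minimal hand-run sketch of that bring-up, reusing the binary path and transport options shown in the trace (the backgrounding and the wait are illustrative; the test script drives this through nvmfappstart and waitforlisten):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # ...wait until the app is listening on /var/tmp/spdk.sock...
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024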
00:16:50.862 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.862 17:33:17 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:50.862 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.862 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:50.862 null0 00:16:50.862 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.862 17:33:17 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:50.862 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.862 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:50.862 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.862 17:33:17 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:50.862 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.862 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:50.862 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.862 17:33:17 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e9771a0724fc40688e1da0040b72f35a 00:16:50.862 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.862 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:50.862 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.862 17:33:17 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:50.862 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.862 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:50.862 [2024-04-18 17:33:17.393011] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:50.862 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.862 17:33:17 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:50.862 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.862 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.127 nvme0n1 00:16:51.127 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.127 17:33:17 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:51.127 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.127 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.127 [ 00:16:51.127 { 00:16:51.127 "name": "nvme0n1", 00:16:51.127 "aliases": [ 00:16:51.127 "e9771a07-24fc-4068-8e1d-a0040b72f35a" 00:16:51.127 ], 00:16:51.127 "product_name": "NVMe disk", 00:16:51.127 "block_size": 512, 00:16:51.127 "num_blocks": 2097152, 00:16:51.127 "uuid": "e9771a07-24fc-4068-8e1d-a0040b72f35a", 00:16:51.127 "assigned_rate_limits": { 00:16:51.127 "rw_ios_per_sec": 0, 00:16:51.127 "rw_mbytes_per_sec": 0, 00:16:51.127 "r_mbytes_per_sec": 0, 00:16:51.127 "w_mbytes_per_sec": 0 00:16:51.127 }, 00:16:51.127 "claimed": false, 00:16:51.127 "zoned": false, 00:16:51.127 "supported_io_types": { 00:16:51.127 "read": true, 00:16:51.127 "write": true, 00:16:51.127 "unmap": false, 00:16:51.127 "write_zeroes": true, 00:16:51.127 "flush": true, 00:16:51.127 "reset": true, 00:16:51.127 "compare": true, 00:16:51.127 "compare_and_write": true, 00:16:51.127 "abort": true, 00:16:51.127 "nvme_admin": true, 00:16:51.127 "nvme_io": true 00:16:51.127 }, 00:16:51.127 "memory_domains": [ 00:16:51.127 { 00:16:51.127 
"dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:51.127 "dma_device_type": 0 00:16:51.127 } 00:16:51.127 ], 00:16:51.127 "driver_specific": { 00:16:51.127 "nvme": [ 00:16:51.127 { 00:16:51.127 "trid": { 00:16:51.127 "trtype": "RDMA", 00:16:51.127 "adrfam": "IPv4", 00:16:51.127 "traddr": "192.168.100.8", 00:16:51.127 "trsvcid": "4420", 00:16:51.127 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:51.127 }, 00:16:51.127 "ctrlr_data": { 00:16:51.127 "cntlid": 1, 00:16:51.127 "vendor_id": "0x8086", 00:16:51.127 "model_number": "SPDK bdev Controller", 00:16:51.127 "serial_number": "00000000000000000000", 00:16:51.127 "firmware_revision": "24.05", 00:16:51.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:51.127 "oacs": { 00:16:51.127 "security": 0, 00:16:51.127 "format": 0, 00:16:51.127 "firmware": 0, 00:16:51.127 "ns_manage": 0 00:16:51.127 }, 00:16:51.127 "multi_ctrlr": true, 00:16:51.127 "ana_reporting": false 00:16:51.127 }, 00:16:51.127 "vs": { 00:16:51.127 "nvme_version": "1.3" 00:16:51.127 }, 00:16:51.127 "ns_data": { 00:16:51.127 "id": 1, 00:16:51.127 "can_share": true 00:16:51.127 } 00:16:51.127 } 00:16:51.127 ], 00:16:51.127 "mp_policy": "active_passive" 00:16:51.127 } 00:16:51.127 } 00:16:51.127 ] 00:16:51.127 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.127 17:33:17 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:51.127 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.127 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.127 [2024-04-18 17:33:17.510554] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:51.127 [2024-04-18 17:33:17.532706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:51.127 [2024-04-18 17:33:17.557891] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:51.127 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.127 17:33:17 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:51.127 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.127 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.127 [ 00:16:51.127 { 00:16:51.127 "name": "nvme0n1", 00:16:51.127 "aliases": [ 00:16:51.127 "e9771a07-24fc-4068-8e1d-a0040b72f35a" 00:16:51.127 ], 00:16:51.127 "product_name": "NVMe disk", 00:16:51.127 "block_size": 512, 00:16:51.127 "num_blocks": 2097152, 00:16:51.127 "uuid": "e9771a07-24fc-4068-8e1d-a0040b72f35a", 00:16:51.127 "assigned_rate_limits": { 00:16:51.127 "rw_ios_per_sec": 0, 00:16:51.127 "rw_mbytes_per_sec": 0, 00:16:51.127 "r_mbytes_per_sec": 0, 00:16:51.127 "w_mbytes_per_sec": 0 00:16:51.127 }, 00:16:51.127 "claimed": false, 00:16:51.127 "zoned": false, 00:16:51.127 "supported_io_types": { 00:16:51.127 "read": true, 00:16:51.127 "write": true, 00:16:51.127 "unmap": false, 00:16:51.127 "write_zeroes": true, 00:16:51.127 "flush": true, 00:16:51.127 "reset": true, 00:16:51.127 "compare": true, 00:16:51.127 "compare_and_write": true, 00:16:51.127 "abort": true, 00:16:51.127 "nvme_admin": true, 00:16:51.127 "nvme_io": true 00:16:51.127 }, 00:16:51.127 "memory_domains": [ 00:16:51.127 { 00:16:51.127 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:51.127 "dma_device_type": 0 00:16:51.127 } 00:16:51.127 ], 00:16:51.127 "driver_specific": { 00:16:51.127 "nvme": [ 00:16:51.127 { 00:16:51.127 "trid": { 00:16:51.127 "trtype": "RDMA", 00:16:51.127 "adrfam": "IPv4", 00:16:51.127 "traddr": "192.168.100.8", 00:16:51.127 "trsvcid": "4420", 00:16:51.127 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:51.127 }, 00:16:51.127 "ctrlr_data": { 00:16:51.127 "cntlid": 2, 00:16:51.127 "vendor_id": "0x8086", 00:16:51.127 "model_number": "SPDK bdev Controller", 00:16:51.127 "serial_number": "00000000000000000000", 00:16:51.127 "firmware_revision": "24.05", 00:16:51.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:51.127 "oacs": { 00:16:51.127 "security": 0, 00:16:51.127 "format": 0, 00:16:51.127 "firmware": 0, 00:16:51.127 "ns_manage": 0 00:16:51.127 }, 00:16:51.127 "multi_ctrlr": true, 00:16:51.127 "ana_reporting": false 00:16:51.127 }, 00:16:51.128 "vs": { 00:16:51.128 "nvme_version": "1.3" 00:16:51.128 }, 00:16:51.128 "ns_data": { 00:16:51.128 "id": 1, 00:16:51.128 "can_share": true 00:16:51.128 } 00:16:51.128 } 00:16:51.128 ], 00:16:51.128 "mp_policy": "active_passive" 00:16:51.128 } 00:16:51.128 } 00:16:51.128 ] 00:16:51.128 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.128 17:33:17 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.128 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.128 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.128 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.128 17:33:17 -- host/async_init.sh@53 -- # mktemp 00:16:51.128 17:33:17 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Ry5VkD7BXj 00:16:51.128 17:33:17 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:51.128 17:33:17 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Ry5VkD7BXj 00:16:51.128 17:33:17 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:51.128 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.128 17:33:17 -- common/autotest_common.sh@10 -- # set +x 
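Editor's note: taken together, the RPCs traced above are the whole async_init data path: a 1 GiB null bdev with 512-byte blocks is exported as namespace 1 of nqn.2016-06.io.spdk:cnode0 under a fixed UUID, an RDMA listener is opened on 192.168.100.8:4420, the initiator side attaches a bdev_nvme controller to it, inspects and resets it, detaches, and finally stages a TLS PSK for the secure-channel variant that follows. A sketch of the same sequence issued by hand through rpc.py (arguments copied from the trace; the test goes through its rpc_cmd wrapper, and the log itself flags TLS support as experimental):

  ./scripts/rpc.py bdev_null_create null0 1024 512
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e9771a0724fc40688e1da0040b72f35a
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  # Initiator side: attach over RDMA, inspect the resulting bdev, reset, detach.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1
  ./scripts/rpc.py bdev_nvme_reset_controller nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
  # Stage the PSK (interchange format) for the secure-channel listener on 4421.
  key_path=$(mktemp)   # /tmp/tmp.Ry5VkD7BXj in this run
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable

The listener on port 4421 is then added with --secure-channel and the host with --psk "$key_path", as the next trace lines show.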
00:16:51.128 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.128 17:33:17 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:16:51.128 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.128 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.128 [2024-04-18 17:33:17.620522] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:16:51.128 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.128 17:33:17 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ry5VkD7BXj 00:16:51.128 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.128 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.128 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.128 17:33:17 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ry5VkD7BXj 00:16:51.128 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.128 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.128 [2024-04-18 17:33:17.636523] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:51.386 nvme0n1 00:16:51.386 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.386 17:33:17 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:51.386 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.386 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.386 [ 00:16:51.386 { 00:16:51.386 "name": "nvme0n1", 00:16:51.386 "aliases": [ 00:16:51.386 "e9771a07-24fc-4068-8e1d-a0040b72f35a" 00:16:51.386 ], 00:16:51.386 "product_name": "NVMe disk", 00:16:51.386 "block_size": 512, 00:16:51.386 "num_blocks": 2097152, 00:16:51.386 "uuid": "e9771a07-24fc-4068-8e1d-a0040b72f35a", 00:16:51.386 "assigned_rate_limits": { 00:16:51.386 "rw_ios_per_sec": 0, 00:16:51.386 "rw_mbytes_per_sec": 0, 00:16:51.386 "r_mbytes_per_sec": 0, 00:16:51.386 "w_mbytes_per_sec": 0 00:16:51.386 }, 00:16:51.386 "claimed": false, 00:16:51.386 "zoned": false, 00:16:51.386 "supported_io_types": { 00:16:51.386 "read": true, 00:16:51.386 "write": true, 00:16:51.386 "unmap": false, 00:16:51.386 "write_zeroes": true, 00:16:51.386 "flush": true, 00:16:51.386 "reset": true, 00:16:51.386 "compare": true, 00:16:51.386 "compare_and_write": true, 00:16:51.386 "abort": true, 00:16:51.386 "nvme_admin": true, 00:16:51.386 "nvme_io": true 00:16:51.386 }, 00:16:51.386 "memory_domains": [ 00:16:51.386 { 00:16:51.386 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:51.386 "dma_device_type": 0 00:16:51.386 } 00:16:51.386 ], 00:16:51.386 "driver_specific": { 00:16:51.386 "nvme": [ 00:16:51.386 { 00:16:51.386 "trid": { 00:16:51.386 "trtype": "RDMA", 00:16:51.386 "adrfam": "IPv4", 00:16:51.386 "traddr": "192.168.100.8", 00:16:51.386 "trsvcid": "4421", 00:16:51.386 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:51.386 }, 00:16:51.386 "ctrlr_data": { 00:16:51.386 "cntlid": 3, 00:16:51.386 "vendor_id": "0x8086", 00:16:51.386 "model_number": "SPDK bdev Controller", 00:16:51.386 "serial_number": "00000000000000000000", 00:16:51.386 "firmware_revision": "24.05", 00:16:51.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:51.386 "oacs": 
{ 00:16:51.386 "security": 0, 00:16:51.386 "format": 0, 00:16:51.386 "firmware": 0, 00:16:51.386 "ns_manage": 0 00:16:51.386 }, 00:16:51.386 "multi_ctrlr": true, 00:16:51.386 "ana_reporting": false 00:16:51.386 }, 00:16:51.386 "vs": { 00:16:51.386 "nvme_version": "1.3" 00:16:51.386 }, 00:16:51.386 "ns_data": { 00:16:51.386 "id": 1, 00:16:51.386 "can_share": true 00:16:51.386 } 00:16:51.386 } 00:16:51.386 ], 00:16:51.386 "mp_policy": "active_passive" 00:16:51.386 } 00:16:51.386 } 00:16:51.386 ] 00:16:51.386 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.386 17:33:17 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.386 17:33:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.386 17:33:17 -- common/autotest_common.sh@10 -- # set +x 00:16:51.386 17:33:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.386 17:33:17 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Ry5VkD7BXj 00:16:51.386 17:33:17 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:51.386 17:33:17 -- host/async_init.sh@78 -- # nvmftestfini 00:16:51.386 17:33:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:51.386 17:33:17 -- nvmf/common.sh@117 -- # sync 00:16:51.386 17:33:17 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:51.386 17:33:17 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:51.386 17:33:17 -- nvmf/common.sh@120 -- # set +e 00:16:51.386 17:33:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.386 17:33:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:51.386 rmmod nvme_rdma 00:16:51.386 rmmod nvme_fabrics 00:16:51.386 17:33:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.386 17:33:17 -- nvmf/common.sh@124 -- # set -e 00:16:51.386 17:33:17 -- nvmf/common.sh@125 -- # return 0 00:16:51.386 17:33:17 -- nvmf/common.sh@478 -- # '[' -n 2036493 ']' 00:16:51.386 17:33:17 -- nvmf/common.sh@479 -- # killprocess 2036493 00:16:51.386 17:33:17 -- common/autotest_common.sh@936 -- # '[' -z 2036493 ']' 00:16:51.386 17:33:17 -- common/autotest_common.sh@940 -- # kill -0 2036493 00:16:51.386 17:33:17 -- common/autotest_common.sh@941 -- # uname 00:16:51.386 17:33:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:51.386 17:33:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2036493 00:16:51.386 17:33:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:51.386 17:33:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:51.386 17:33:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2036493' 00:16:51.386 killing process with pid 2036493 00:16:51.386 17:33:17 -- common/autotest_common.sh@955 -- # kill 2036493 00:16:51.386 17:33:17 -- common/autotest_common.sh@960 -- # wait 2036493 00:16:51.644 17:33:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:51.644 17:33:18 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:16:51.644 00:16:51.644 real 0m3.309s 00:16:51.644 user 0m1.983s 00:16:51.644 sys 0m1.724s 00:16:51.644 17:33:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:51.644 17:33:18 -- common/autotest_common.sh@10 -- # set +x 00:16:51.644 ************************************ 00:16:51.644 END TEST nvmf_async_init 00:16:51.644 ************************************ 00:16:51.644 17:33:18 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:16:51.644 17:33:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:51.644 17:33:18 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:16:51.644 17:33:18 -- common/autotest_common.sh@10 -- # set +x 00:16:51.903 ************************************ 00:16:51.903 START TEST dma 00:16:51.903 ************************************ 00:16:51.903 17:33:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:16:51.903 * Looking for test storage... 00:16:51.903 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:51.903 17:33:18 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.903 17:33:18 -- nvmf/common.sh@7 -- # uname -s 00:16:51.903 17:33:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.903 17:33:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.903 17:33:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.903 17:33:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.903 17:33:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.903 17:33:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.903 17:33:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.903 17:33:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.903 17:33:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.903 17:33:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.903 17:33:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.903 17:33:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.903 17:33:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.903 17:33:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.903 17:33:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.903 17:33:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.903 17:33:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:51.903 17:33:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.903 17:33:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.903 17:33:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.903 17:33:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.903 17:33:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.903 17:33:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.903 17:33:18 -- paths/export.sh@5 -- # export PATH 00:16:51.903 17:33:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.903 17:33:18 -- nvmf/common.sh@47 -- # : 0 00:16:51.903 17:33:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.903 17:33:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.903 17:33:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.903 17:33:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.903 17:33:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.903 17:33:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.903 17:33:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.903 17:33:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.903 17:33:18 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:16:51.903 17:33:18 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:16:51.903 17:33:18 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:16:51.903 17:33:18 -- host/dma.sh@18 -- # subsystem=0 00:16:51.903 17:33:18 -- host/dma.sh@93 -- # nvmftestinit 00:16:51.903 17:33:18 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:16:51.903 17:33:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.903 17:33:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:51.903 17:33:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:51.903 17:33:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:51.903 17:33:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.903 17:33:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.903 17:33:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.903 17:33:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:51.903 17:33:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:51.903 17:33:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.903 17:33:18 -- common/autotest_common.sh@10 -- # set +x 00:16:53.804 17:33:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:53.804 17:33:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.804 17:33:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.804 17:33:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.804 17:33:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.804 17:33:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:53.804 17:33:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.804 17:33:20 -- nvmf/common.sh@295 -- # net_devs=() 
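Editor's note: the nvmftestinit pass the dma test runs next repeats the NIC discovery already seen above for async_init: supported devices are keyed by PCI vendor/device ID (0x15b3:0x1017, the ConnectX-5 pair on this rig) and each function is then mapped to its netdev through /sys/bus/pci/devices/<bdf>/net. A rough stand-alone equivalent, assuming lspci from pciutils is available (the script itself uses its own pci_bus_cache rather than lspci):

  # Hypothetical one-liner: list ConnectX-5 (15b3:1017) functions and their netdevs.
  for pci in $(lspci -Dnn -d 15b3:1017 | awk '{print $1}'); do
      echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
  done
  # On this node that yields 0000:09:00.0 -> mlx_0_0 and 0000:09:00.1 -> mlx_0_1.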
00:16:53.804 17:33:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.804 17:33:20 -- nvmf/common.sh@296 -- # e810=() 00:16:53.804 17:33:20 -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.804 17:33:20 -- nvmf/common.sh@297 -- # x722=() 00:16:53.804 17:33:20 -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.804 17:33:20 -- nvmf/common.sh@298 -- # mlx=() 00:16:53.804 17:33:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.804 17:33:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.804 17:33:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.804 17:33:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.804 17:33:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.804 17:33:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.804 17:33:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.804 17:33:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.804 17:33:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.804 17:33:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.804 17:33:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.804 17:33:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.804 17:33:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.804 17:33:20 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:53.804 17:33:20 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:53.804 17:33:20 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:53.804 17:33:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.804 17:33:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.804 17:33:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:16:53.804 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:16:53.804 17:33:20 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:53.804 17:33:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.804 17:33:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:16:53.804 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:16:53.804 17:33:20 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:53.804 17:33:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:53.804 17:33:20 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.804 17:33:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.804 17:33:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:53.804 17:33:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.804 17:33:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: 
mlx_0_0' 00:16:53.804 Found net devices under 0000:09:00.0: mlx_0_0 00:16:53.804 17:33:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.804 17:33:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.804 17:33:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.804 17:33:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:53.804 17:33:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.804 17:33:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:16:53.804 Found net devices under 0000:09:00.1: mlx_0_1 00:16:53.804 17:33:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.804 17:33:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:53.804 17:33:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:53.804 17:33:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@409 -- # rdma_device_init 00:16:53.804 17:33:20 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:16:53.804 17:33:20 -- nvmf/common.sh@58 -- # uname 00:16:53.804 17:33:20 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:53.804 17:33:20 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:53.804 17:33:20 -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:53.804 17:33:20 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:53.804 17:33:20 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:53.804 17:33:20 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:53.804 17:33:20 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:53.804 17:33:20 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:53.804 17:33:20 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:16:53.804 17:33:20 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:53.804 17:33:20 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:53.804 17:33:20 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:53.804 17:33:20 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:53.804 17:33:20 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:53.804 17:33:20 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:53.804 17:33:20 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:53.804 17:33:20 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:53.804 17:33:20 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:53.804 17:33:20 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:53.804 17:33:20 -- nvmf/common.sh@105 -- # continue 2 00:16:53.804 17:33:20 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:53.804 17:33:20 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:53.804 17:33:20 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:53.804 17:33:20 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:53.804 17:33:20 -- nvmf/common.sh@105 -- # continue 2 00:16:53.804 17:33:20 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:53.804 17:33:20 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:53.804 17:33:20 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:53.804 17:33:20 -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:53.804 17:33:20 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:53.804 17:33:20 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:53.804 17:33:20 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:53.804 17:33:20 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:53.804 17:33:20 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:53.804 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:53.804 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:16:53.804 altname enp9s0f0np0 00:16:53.804 inet 192.168.100.8/24 scope global mlx_0_0 00:16:53.805 valid_lft forever preferred_lft forever 00:16:53.805 17:33:20 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:53.805 17:33:20 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:53.805 17:33:20 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:53.805 17:33:20 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:53.805 17:33:20 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:53.805 17:33:20 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:53.805 17:33:20 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:54.063 17:33:20 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:54.063 17:33:20 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:54.063 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:54.063 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:16:54.063 altname enp9s0f1np1 00:16:54.063 inet 192.168.100.9/24 scope global mlx_0_1 00:16:54.063 valid_lft forever preferred_lft forever 00:16:54.063 17:33:20 -- nvmf/common.sh@411 -- # return 0 00:16:54.063 17:33:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:54.063 17:33:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:54.063 17:33:20 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:16:54.063 17:33:20 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:16:54.063 17:33:20 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:54.063 17:33:20 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:54.063 17:33:20 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:54.063 17:33:20 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:54.063 17:33:20 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:54.063 17:33:20 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:54.063 17:33:20 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:54.063 17:33:20 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:54.063 17:33:20 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:54.063 17:33:20 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:54.063 17:33:20 -- nvmf/common.sh@105 -- # continue 2 00:16:54.063 17:33:20 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:54.063 17:33:20 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:54.063 17:33:20 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:54.063 17:33:20 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:54.063 17:33:20 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:54.063 17:33:20 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:54.063 17:33:20 -- nvmf/common.sh@105 -- # continue 2 00:16:54.063 17:33:20 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:54.063 17:33:20 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:54.063 17:33:20 -- nvmf/common.sh@112 -- # interface=mlx_0_0 
00:16:54.063 17:33:20 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:54.063 17:33:20 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:54.063 17:33:20 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:54.063 17:33:20 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:54.063 17:33:20 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:54.063 17:33:20 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:54.063 17:33:20 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:54.063 17:33:20 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:54.063 17:33:20 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:54.063 17:33:20 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:16:54.063 192.168.100.9' 00:16:54.063 17:33:20 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:54.063 192.168.100.9' 00:16:54.063 17:33:20 -- nvmf/common.sh@446 -- # head -n 1 00:16:54.063 17:33:20 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:54.063 17:33:20 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:16:54.063 192.168.100.9' 00:16:54.063 17:33:20 -- nvmf/common.sh@447 -- # tail -n +2 00:16:54.063 17:33:20 -- nvmf/common.sh@447 -- # head -n 1 00:16:54.063 17:33:20 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:54.063 17:33:20 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:16:54.063 17:33:20 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:54.063 17:33:20 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:16:54.063 17:33:20 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:16:54.063 17:33:20 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:16:54.063 17:33:20 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:16:54.063 17:33:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:54.063 17:33:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:54.063 17:33:20 -- common/autotest_common.sh@10 -- # set +x 00:16:54.063 17:33:20 -- nvmf/common.sh@470 -- # nvmfpid=2038307 00:16:54.063 17:33:20 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:54.063 17:33:20 -- nvmf/common.sh@471 -- # waitforlisten 2038307 00:16:54.063 17:33:20 -- common/autotest_common.sh@817 -- # '[' -z 2038307 ']' 00:16:54.063 17:33:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.063 17:33:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:54.063 17:33:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.063 17:33:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:54.063 17:33:20 -- common/autotest_common.sh@10 -- # set +x 00:16:54.063 [2024-04-18 17:33:20.436700] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:16:54.063 [2024-04-18 17:33:20.436776] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.063 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.063 [2024-04-18 17:33:20.498101] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:54.322 [2024-04-18 17:33:20.613446] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:54.322 [2024-04-18 17:33:20.613502] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.322 [2024-04-18 17:33:20.613530] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.322 [2024-04-18 17:33:20.613541] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.322 [2024-04-18 17:33:20.613551] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.322 [2024-04-18 17:33:20.617409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.322 [2024-04-18 17:33:20.617422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.322 17:33:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:54.322 17:33:20 -- common/autotest_common.sh@850 -- # return 0 00:16:54.322 17:33:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:54.322 17:33:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:54.322 17:33:20 -- common/autotest_common.sh@10 -- # set +x 00:16:54.322 17:33:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.322 17:33:20 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:16:54.322 17:33:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.322 17:33:20 -- common/autotest_common.sh@10 -- # set +x 00:16:54.322 [2024-04-18 17:33:20.793709] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1486530/0x148aa20) succeed. 00:16:54.322 [2024-04-18 17:33:20.805364] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1487a30/0x14cc0b0) succeed. 00:16:54.582 17:33:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.582 17:33:20 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:16:54.582 17:33:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.582 17:33:20 -- common/autotest_common.sh@10 -- # set +x 00:16:54.582 Malloc0 00:16:54.582 17:33:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.582 17:33:20 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:16:54.582 17:33:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.582 17:33:20 -- common/autotest_common.sh@10 -- # set +x 00:16:54.582 17:33:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.582 17:33:20 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:16:54.582 17:33:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.582 17:33:20 -- common/autotest_common.sh@10 -- # set +x 00:16:54.582 17:33:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.582 17:33:20 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:54.582 17:33:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.582 17:33:20 -- common/autotest_common.sh@10 -- # set +x 00:16:54.582 [2024-04-18 17:33:20.998012] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:54.582 17:33:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.582 17:33:21 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 
00:16:54.582 17:33:21 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:16:54.582 17:33:21 -- nvmf/common.sh@521 -- # config=() 00:16:54.582 17:33:21 -- nvmf/common.sh@521 -- # local subsystem config 00:16:54.582 17:33:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:54.582 17:33:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:54.582 { 00:16:54.582 "params": { 00:16:54.582 "name": "Nvme$subsystem", 00:16:54.582 "trtype": "$TEST_TRANSPORT", 00:16:54.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.582 "adrfam": "ipv4", 00:16:54.582 "trsvcid": "$NVMF_PORT", 00:16:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.582 "hdgst": ${hdgst:-false}, 00:16:54.582 "ddgst": ${ddgst:-false} 00:16:54.582 }, 00:16:54.582 "method": "bdev_nvme_attach_controller" 00:16:54.582 } 00:16:54.582 EOF 00:16:54.582 )") 00:16:54.582 17:33:21 -- nvmf/common.sh@543 -- # cat 00:16:54.582 17:33:21 -- nvmf/common.sh@545 -- # jq . 00:16:54.582 17:33:21 -- nvmf/common.sh@546 -- # IFS=, 00:16:54.582 17:33:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:54.582 "params": { 00:16:54.582 "name": "Nvme0", 00:16:54.582 "trtype": "rdma", 00:16:54.582 "traddr": "192.168.100.8", 00:16:54.582 "adrfam": "ipv4", 00:16:54.582 "trsvcid": "4420", 00:16:54.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:54.582 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:54.582 "hdgst": false, 00:16:54.582 "ddgst": false 00:16:54.582 }, 00:16:54.582 "method": "bdev_nvme_attach_controller" 00:16:54.582 }' 00:16:54.582 [2024-04-18 17:33:21.046682] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:16:54.582 [2024-04-18 17:33:21.046768] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038380 ] 00:16:54.582 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.582 [2024-04-18 17:33:21.110146] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:54.841 [2024-04-18 17:33:21.220354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.841 [2024-04-18 17:33:21.220358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.408 bdev Nvme0n1 reports 1 memory domains 00:17:01.408 bdev Nvme0n1 supports RDMA memory domain 00:17:01.408 Initialization complete, running randrw IO for 5 sec on 2 cores 00:17:01.408 ========================================================================== 00:17:01.408 Latency [us] 00:17:01.408 IOPS MiB/s Average min max 00:17:01.408 Core 2: 19003.95 74.23 841.11 314.53 6620.23 00:17:01.408 Core 3: 19184.12 74.94 833.18 289.81 6680.49 00:17:01.408 ========================================================================== 00:17:01.408 Total : 38188.07 149.17 837.13 289.81 6680.49 00:17:01.408 00:17:01.408 Total operations: 190973, translate 190973 pull_push 0 memzero 0 00:17:01.408 17:33:26 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:17:01.408 17:33:26 -- host/dma.sh@107 -- # gen_malloc_json 00:17:01.408 17:33:26 -- host/dma.sh@21 -- # jq . 00:17:01.408 [2024-04-18 17:33:26.752740] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:17:01.408 [2024-04-18 17:33:26.752816] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039111 ] 00:17:01.408 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.408 [2024-04-18 17:33:26.812371] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:01.408 [2024-04-18 17:33:26.916046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:01.408 [2024-04-18 17:33:26.916049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.698 bdev Malloc0 reports 2 memory domains 00:17:06.698 bdev Malloc0 doesn't support RDMA memory domain 00:17:06.698 Initialization complete, running randrw IO for 5 sec on 2 cores 00:17:06.698 ========================================================================== 00:17:06.698 Latency [us] 00:17:06.698 IOPS MiB/s Average min max 00:17:06.698 Core 2: 12346.52 48.23 1294.92 441.38 2199.40 00:17:06.698 Core 3: 12500.65 48.83 1278.94 486.81 2275.91 00:17:06.698 ========================================================================== 00:17:06.698 Total : 24847.17 97.06 1286.88 441.38 2275.91 00:17:06.698 00:17:06.698 Total operations: 124287, translate 0 pull_push 497148 memzero 0 00:17:06.698 17:33:32 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:17:06.698 17:33:32 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:17:06.698 17:33:32 -- host/dma.sh@48 -- # local subsystem=0 00:17:06.698 17:33:32 -- host/dma.sh@50 -- # jq . 00:17:06.698 Ignoring -M option 00:17:06.698 [2024-04-18 17:33:32.395692] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:17:06.698 [2024-04-18 17:33:32.395775] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039773 ] 00:17:06.698 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.698 [2024-04-18 17:33:32.455574] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:06.698 [2024-04-18 17:33:32.562963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.698 [2024-04-18 17:33:32.562967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.698 [2024-04-18 17:33:32.822972] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:17:11.970 [2024-04-18 17:33:37.853417] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:17:11.970 bdev ffdaebe2-c928-410e-9e6a-75ebe055286e reports 1 memory domains 00:17:11.970 bdev ffdaebe2-c928-410e-9e6a-75ebe055286e supports RDMA memory domain 00:17:11.970 Initialization complete, running randread IO for 5 sec on 2 cores 00:17:11.970 ========================================================================== 00:17:11.970 Latency [us] 00:17:11.970 IOPS MiB/s Average min max 00:17:11.970 Core 2: 65720.14 256.72 242.45 72.28 1880.35 00:17:11.970 Core 3: 68994.35 269.51 230.93 77.25 1893.98 00:17:11.970 ========================================================================== 00:17:11.970 Total : 134714.49 526.23 236.55 72.28 1893.98 00:17:11.970 00:17:11.970 Total operations: 673653, translate 0 pull_push 0 memzero 673653 00:17:11.970 17:33:38 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:17:11.970 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.970 [2024-04-18 17:33:38.256066] subsystem.c:1431:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:14.509 Initializing NVMe Controllers 00:17:14.509 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:17:14.509 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:17:14.509 Initialization complete. Launching workers. 00:17:14.509 ======================================================== 00:17:14.509 Latency(us) 00:17:14.509 Device Information : IOPS MiB/s Average min max 00:17:14.509 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.87 7971.78 5980.90 9983.68 00:17:14.509 ======================================================== 00:17:14.509 Total : 2016.00 7.87 7971.78 5980.90 9983.68 00:17:14.509 00:17:14.509 17:33:40 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:17:14.509 17:33:40 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:17:14.509 17:33:40 -- host/dma.sh@48 -- # local subsystem=0 00:17:14.509 17:33:40 -- host/dma.sh@50 -- # jq . 
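Editor's note: every dma pass in this section is the same test_dma binary pointed at a different bdev and accelerator path: -x translate against the NVMe-oF bdev (Nvme0n1), -x pull_push against a plain Malloc0, then -x memzero and -x translate against an lvol (lvs0/lvol0) carved on top of the target; the closing "Total operations: ... translate ... pull_push ... memzero ..." line reports how many I/Os went down each path. The first pass, reproduced by hand inside the same test environment (nvmf/common.sh sourced and the target from the earlier steps still up), is just:

  ./test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
      --json <(gen_nvmf_target_json 0) -b Nvme0n1 -f -x translate

gen_nvmf_target_json emits the bdev_nvme_attach_controller config printed earlier in the trace (Nvme0 over rdma to 192.168.100.8:4420), so the benchmark brings up its own bdev_nvme controller from that JSON at startup instead of going through the RPC socket.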
00:17:14.509 [2024-04-18 17:33:40.597799] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:17:14.509 [2024-04-18 17:33:40.597877] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040699 ] 00:17:14.509 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.509 [2024-04-18 17:33:40.659053] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:14.509 [2024-04-18 17:33:40.772498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.509 [2024-04-18 17:33:40.772502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.509 [2024-04-18 17:33:41.039286] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:17:19.786 [2024-04-18 17:33:46.069888] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:17:20.044 bdev 192a8027-8da4-4a04-9e56-ec2d78523206 reports 1 memory domains 00:17:20.044 bdev 192a8027-8da4-4a04-9e56-ec2d78523206 supports RDMA memory domain 00:17:20.044 Initialization complete, running randrw IO for 5 sec on 2 cores 00:17:20.044 ========================================================================== 00:17:20.044 Latency [us] 00:17:20.044 IOPS MiB/s Average min max 00:17:20.044 Core 2: 16498.30 64.45 968.61 69.85 10428.46 00:17:20.044 Core 3: 16218.24 63.35 985.44 20.01 10208.95 00:17:20.044 ========================================================================== 00:17:20.044 Total : 32716.54 127.80 976.96 20.01 10428.46 00:17:20.044 00:17:20.044 Total operations: 163663, translate 163562 pull_push 0 memzero 101 00:17:20.044 17:33:46 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:17:20.044 17:33:46 -- host/dma.sh@120 -- # nvmftestfini 00:17:20.044 17:33:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:20.044 17:33:46 -- nvmf/common.sh@117 -- # sync 00:17:20.044 17:33:46 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:20.044 17:33:46 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:20.044 17:33:46 -- nvmf/common.sh@120 -- # set +e 00:17:20.044 17:33:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:20.044 17:33:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:20.044 rmmod nvme_rdma 00:17:20.044 rmmod nvme_fabrics 00:17:20.044 17:33:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:20.044 17:33:46 -- nvmf/common.sh@124 -- # set -e 00:17:20.044 17:33:46 -- nvmf/common.sh@125 -- # return 0 00:17:20.044 17:33:46 -- nvmf/common.sh@478 -- # '[' -n 2038307 ']' 00:17:20.044 17:33:46 -- nvmf/common.sh@479 -- # killprocess 2038307 00:17:20.044 17:33:46 -- common/autotest_common.sh@936 -- # '[' -z 2038307 ']' 00:17:20.044 17:33:46 -- common/autotest_common.sh@940 -- # kill -0 2038307 00:17:20.044 17:33:46 -- common/autotest_common.sh@941 -- # uname 00:17:20.044 17:33:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.044 17:33:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2038307 00:17:20.045 17:33:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:20.045 17:33:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:20.045 17:33:46 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 2038307' 00:17:20.045 killing process with pid 2038307 00:17:20.045 17:33:46 -- common/autotest_common.sh@955 -- # kill 2038307 00:17:20.045 17:33:46 -- common/autotest_common.sh@960 -- # wait 2038307 00:17:20.614 17:33:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:20.614 17:33:46 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:20.614 00:17:20.614 real 0m28.622s 00:17:20.614 user 1m36.455s 00:17:20.614 sys 0m2.807s 00:17:20.614 17:33:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:20.614 17:33:46 -- common/autotest_common.sh@10 -- # set +x 00:17:20.614 ************************************ 00:17:20.614 END TEST dma 00:17:20.614 ************************************ 00:17:20.614 17:33:46 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:17:20.614 17:33:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:20.614 17:33:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:20.614 17:33:46 -- common/autotest_common.sh@10 -- # set +x 00:17:20.614 ************************************ 00:17:20.614 START TEST nvmf_identify 00:17:20.614 ************************************ 00:17:20.614 17:33:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:17:20.614 * Looking for test storage... 00:17:20.614 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:20.614 17:33:47 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.614 17:33:47 -- nvmf/common.sh@7 -- # uname -s 00:17:20.614 17:33:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.614 17:33:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.614 17:33:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.614 17:33:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.614 17:33:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.614 17:33:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.614 17:33:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.614 17:33:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.614 17:33:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.614 17:33:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.614 17:33:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:20.614 17:33:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:20.614 17:33:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.614 17:33:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.614 17:33:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.614 17:33:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.614 17:33:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:20.614 17:33:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.614 17:33:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.614 17:33:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.614 17:33:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.614 17:33:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.614 17:33:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.614 17:33:47 -- paths/export.sh@5 -- # export PATH 00:17:20.614 17:33:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.614 17:33:47 -- nvmf/common.sh@47 -- # : 0 00:17:20.614 17:33:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:20.614 17:33:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:20.614 17:33:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.614 17:33:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.614 17:33:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.614 17:33:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:20.614 17:33:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:20.614 17:33:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:20.614 17:33:47 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:20.614 17:33:47 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:20.614 17:33:47 -- host/identify.sh@14 -- # nvmftestinit 00:17:20.614 17:33:47 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:20.614 17:33:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.614 17:33:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:20.614 17:33:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:20.614 17:33:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:20.614 17:33:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:17:20.614 17:33:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.614 17:33:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.614 17:33:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:20.614 17:33:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:20.614 17:33:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:20.614 17:33:47 -- common/autotest_common.sh@10 -- # set +x 00:17:22.526 17:33:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:22.526 17:33:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:22.526 17:33:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:22.526 17:33:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:22.526 17:33:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:22.526 17:33:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:22.526 17:33:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:22.526 17:33:48 -- nvmf/common.sh@295 -- # net_devs=() 00:17:22.526 17:33:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:22.526 17:33:48 -- nvmf/common.sh@296 -- # e810=() 00:17:22.526 17:33:48 -- nvmf/common.sh@296 -- # local -ga e810 00:17:22.526 17:33:48 -- nvmf/common.sh@297 -- # x722=() 00:17:22.526 17:33:48 -- nvmf/common.sh@297 -- # local -ga x722 00:17:22.526 17:33:48 -- nvmf/common.sh@298 -- # mlx=() 00:17:22.526 17:33:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:22.526 17:33:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:22.526 17:33:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:22.526 17:33:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:22.526 17:33:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:22.526 17:33:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:22.526 17:33:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:22.526 17:33:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:22.526 17:33:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:22.526 17:33:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:22.526 17:33:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:22.526 17:33:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:22.526 17:33:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:22.526 17:33:48 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:22.526 17:33:48 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:22.526 17:33:48 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:22.526 17:33:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:22.526 17:33:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:22.526 17:33:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:17:22.526 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:17:22.526 17:33:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:22.526 17:33:48 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:17:22.526 17:33:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:17:22.526 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:17:22.526 17:33:48 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:22.526 17:33:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:22.526 17:33:48 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:22.526 17:33:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.526 17:33:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:22.526 17:33:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.526 17:33:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:17:22.526 Found net devices under 0000:09:00.0: mlx_0_0 00:17:22.526 17:33:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.526 17:33:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:22.526 17:33:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.526 17:33:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:22.526 17:33:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.526 17:33:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:17:22.526 Found net devices under 0000:09:00.1: mlx_0_1 00:17:22.526 17:33:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.526 17:33:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:22.526 17:33:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:22.526 17:33:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:22.526 17:33:48 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:22.526 17:33:48 -- nvmf/common.sh@58 -- # uname 00:17:22.526 17:33:48 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:22.526 17:33:48 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:22.526 17:33:48 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:22.526 17:33:48 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:22.526 17:33:48 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:22.526 17:33:48 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:22.526 17:33:48 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:22.526 17:33:48 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:22.526 17:33:48 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:22.526 17:33:48 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:22.526 17:33:48 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:22.526 17:33:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:22.526 17:33:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:22.526 17:33:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:22.526 17:33:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:22.526 17:33:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:22.526 17:33:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:22.526 17:33:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:17:22.526 17:33:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:22.526 17:33:48 -- nvmf/common.sh@105 -- # continue 2 00:17:22.526 17:33:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:22.526 17:33:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.526 17:33:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.526 17:33:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:22.526 17:33:48 -- nvmf/common.sh@105 -- # continue 2 00:17:22.526 17:33:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:22.526 17:33:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:22.526 17:33:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:22.526 17:33:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:22.526 17:33:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:22.526 17:33:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:22.526 17:33:48 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:22.526 17:33:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:22.526 17:33:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:22.526 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:22.526 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:17:22.526 altname enp9s0f0np0 00:17:22.526 inet 192.168.100.8/24 scope global mlx_0_0 00:17:22.526 valid_lft forever preferred_lft forever 00:17:22.526 17:33:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:22.526 17:33:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:22.526 17:33:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:22.527 17:33:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:22.527 17:33:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:22.527 17:33:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:22.527 17:33:48 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:22.527 17:33:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:22.527 17:33:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:22.527 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:22.527 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:17:22.527 altname enp9s0f1np1 00:17:22.527 inet 192.168.100.9/24 scope global mlx_0_1 00:17:22.527 valid_lft forever preferred_lft forever 00:17:22.527 17:33:48 -- nvmf/common.sh@411 -- # return 0 00:17:22.527 17:33:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:22.527 17:33:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:22.527 17:33:48 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:22.527 17:33:48 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:22.527 17:33:48 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:22.527 17:33:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:22.527 17:33:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:22.527 17:33:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:22.527 17:33:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:22.527 17:33:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:22.527 17:33:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:22.527 17:33:48 -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.527 17:33:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:22.527 17:33:48 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:22.527 17:33:48 -- nvmf/common.sh@105 -- # continue 2 00:17:22.527 17:33:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:22.527 17:33:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.527 17:33:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:22.527 17:33:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:22.527 17:33:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:22.527 17:33:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:22.527 17:33:48 -- nvmf/common.sh@105 -- # continue 2 00:17:22.527 17:33:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:22.527 17:33:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:22.527 17:33:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:22.527 17:33:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:22.527 17:33:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:22.527 17:33:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:22.527 17:33:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:22.527 17:33:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:22.527 17:33:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:22.527 17:33:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:22.527 17:33:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:22.527 17:33:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:22.527 17:33:48 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:22.527 192.168.100.9' 00:17:22.527 17:33:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:22.527 192.168.100.9' 00:17:22.527 17:33:48 -- nvmf/common.sh@446 -- # head -n 1 00:17:22.527 17:33:48 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:22.527 17:33:48 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:22.527 192.168.100.9' 00:17:22.527 17:33:48 -- nvmf/common.sh@447 -- # tail -n +2 00:17:22.527 17:33:48 -- nvmf/common.sh@447 -- # head -n 1 00:17:22.527 17:33:48 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:22.527 17:33:48 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:22.527 17:33:48 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:22.527 17:33:48 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:22.527 17:33:48 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:22.527 17:33:48 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:22.527 17:33:48 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:22.527 17:33:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:22.527 17:33:48 -- common/autotest_common.sh@10 -- # set +x 00:17:22.527 17:33:48 -- host/identify.sh@19 -- # nvmfpid=2043038 00:17:22.527 17:33:48 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:22.527 17:33:48 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:22.527 17:33:48 -- host/identify.sh@23 -- # waitforlisten 2043038 00:17:22.527 17:33:48 -- common/autotest_common.sh@817 -- # '[' -z 2043038 ']' 00:17:22.527 17:33:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.527 17:33:48 -- common/autotest_common.sh@822 -- # local max_retries=100 
00:17:22.527 17:33:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.527 17:33:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:22.527 17:33:48 -- common/autotest_common.sh@10 -- # set +x 00:17:22.527 [2024-04-18 17:33:49.026226] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:17:22.527 [2024-04-18 17:33:49.026311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.527 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.786 [2024-04-18 17:33:49.084434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:22.786 [2024-04-18 17:33:49.189812] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.786 [2024-04-18 17:33:49.189864] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.786 [2024-04-18 17:33:49.189894] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.786 [2024-04-18 17:33:49.189909] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.786 [2024-04-18 17:33:49.189920] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.786 [2024-04-18 17:33:49.190060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.786 [2024-04-18 17:33:49.190120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.786 [2024-04-18 17:33:49.190185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:22.786 [2024-04-18 17:33:49.190188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.786 17:33:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:22.786 17:33:49 -- common/autotest_common.sh@850 -- # return 0 00:17:22.786 17:33:49 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:22.786 17:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.786 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:17:23.046 [2024-04-18 17:33:49.344219] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b09b60/0x1b0e050) succeed. 00:17:23.046 [2024-04-18 17:33:49.354932] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b0b150/0x1b4f6e0) succeed. 
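The nvmf_tgt application above is started with -i 0 -e 0xFFFF -m 0xF and the test then waits for its RPC socket before configuring it. Outside the harness the same wait can be done by polling any cheap RPC; a minimal sketch (assumed paths, spdk_get_version chosen here only as a no-op probe):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # Block until the target answers on /var/tmp/spdk.sock before sending configuration RPCs.
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done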
00:17:23.046 17:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.046 17:33:49 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:23.046 17:33:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:23.046 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:17:23.046 17:33:49 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:23.046 17:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.046 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:17:23.046 Malloc0 00:17:23.046 17:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.046 17:33:49 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:23.046 17:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.046 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:17:23.046 17:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.046 17:33:49 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:23.046 17:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.046 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:17:23.046 17:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.046 17:33:49 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:23.046 17:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.046 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:17:23.046 [2024-04-18 17:33:49.575039] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:23.046 17:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.046 17:33:49 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:23.046 17:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.046 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:17:23.311 17:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.311 17:33:49 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:23.311 17:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.311 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:17:23.311 [2024-04-18 17:33:49.590737] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:23.311 [ 00:17:23.311 { 00:17:23.311 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:23.311 "subtype": "Discovery", 00:17:23.311 "listen_addresses": [ 00:17:23.311 { 00:17:23.311 "transport": "RDMA", 00:17:23.311 "trtype": "RDMA", 00:17:23.311 "adrfam": "IPv4", 00:17:23.311 "traddr": "192.168.100.8", 00:17:23.311 "trsvcid": "4420" 00:17:23.311 } 00:17:23.311 ], 00:17:23.311 "allow_any_host": true, 00:17:23.311 "hosts": [] 00:17:23.311 }, 00:17:23.311 { 00:17:23.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:23.311 "subtype": "NVMe", 00:17:23.311 "listen_addresses": [ 00:17:23.311 { 00:17:23.311 "transport": "RDMA", 00:17:23.311 "trtype": "RDMA", 00:17:23.311 "adrfam": "IPv4", 00:17:23.311 "traddr": "192.168.100.8", 00:17:23.311 "trsvcid": "4420" 00:17:23.311 } 00:17:23.311 ], 00:17:23.311 "allow_any_host": true, 00:17:23.311 "hosts": [], 00:17:23.311 "serial_number": "SPDK00000000000001", 
00:17:23.311 "model_number": "SPDK bdev Controller", 00:17:23.311 "max_namespaces": 32, 00:17:23.311 "min_cntlid": 1, 00:17:23.311 "max_cntlid": 65519, 00:17:23.311 "namespaces": [ 00:17:23.311 { 00:17:23.311 "nsid": 1, 00:17:23.311 "bdev_name": "Malloc0", 00:17:23.311 "name": "Malloc0", 00:17:23.311 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:23.311 "eui64": "ABCDEF0123456789", 00:17:23.311 "uuid": "812ba5b1-7153-4f09-b98d-d67429cdb834" 00:17:23.311 } 00:17:23.311 ] 00:17:23.311 } 00:17:23.311 ] 00:17:23.311 17:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.311 17:33:49 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:23.311 [2024-04-18 17:33:49.616842] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:17:23.311 [2024-04-18 17:33:49.616886] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043183 ] 00:17:23.311 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.311 [2024-04-18 17:33:49.669392] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:23.311 [2024-04-18 17:33:49.669506] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:17:23.311 [2024-04-18 17:33:49.669531] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:17:23.311 [2024-04-18 17:33:49.669539] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:17:23.311 [2024-04-18 17:33:49.669589] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:23.311 [2024-04-18 17:33:49.679961] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:17:23.311 [2024-04-18 17:33:49.695909] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:23.311 [2024-04-18 17:33:49.695924] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:17:23.311 [2024-04-18 17:33:49.695937] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.695947] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.695955] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.695962] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.695970] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.695978] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.695986] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.695993] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696001] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696009] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696017] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696025] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696032] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696040] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696048] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696056] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696063] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696076] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696084] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696092] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696100] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696107] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696115] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 
17:33:49.696123] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696130] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696138] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696146] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696154] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696162] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696170] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696177] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x188700 00:17:23.311 [2024-04-18 17:33:49.696186] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:17:23.312 [2024-04-18 17:33:49.696194] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:23.312 [2024-04-18 17:33:49.696199] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:17:23.312 [2024-04-18 17:33:49.696237] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.696260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf200 len:0x400 key:0x188700 00:17:23.312 [2024-04-18 17:33:49.703391] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.312 [2024-04-18 17:33:49.703414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:23.312 [2024-04-18 17:33:49.703429] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.703441] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:23.312 [2024-04-18 17:33:49.703453] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:23.312 [2024-04-18 17:33:49.703463] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:23.312 [2024-04-18 17:33:49.703488] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.703502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.312 [2024-04-18 17:33:49.703531] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.312 [2024-04-18 17:33:49.703541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:17:23.312 [2024-04-18 17:33:49.703555] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:23.312 [2024-04-18 17:33:49.703564] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.703577] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:23.312 [2024-04-18 17:33:49.703590] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.703601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.312 [2024-04-18 17:33:49.703623] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.312 [2024-04-18 17:33:49.703632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:17:23.312 [2024-04-18 17:33:49.703643] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:23.312 [2024-04-18 17:33:49.703651] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.703662] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:23.312 [2024-04-18 17:33:49.703673] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.703684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.312 [2024-04-18 17:33:49.703714] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.312 [2024-04-18 17:33:49.703723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:23.312 [2024-04-18 17:33:49.703733] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:23.312 [2024-04-18 17:33:49.703741] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.703753] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.703764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.312 [2024-04-18 17:33:49.703782] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.312 [2024-04-18 17:33:49.703791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:23.312 [2024-04-18 17:33:49.703801] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:23.312 [2024-04-18 17:33:49.703809] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:23.312 [2024-04-18 17:33:49.703816] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.703825] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:23.312 [2024-04-18 17:33:49.703935] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:23.312 [2024-04-18 17:33:49.703943] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:23.312 [2024-04-18 17:33:49.703959] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.703970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.312 [2024-04-18 17:33:49.703996] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.312 [2024-04-18 17:33:49.704009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:23.312 [2024-04-18 17:33:49.704019] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:23.312 [2024-04-18 17:33:49.704027] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.704039] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.704050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.312 [2024-04-18 17:33:49.704067] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.312 [2024-04-18 17:33:49.704075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:23.312 [2024-04-18 17:33:49.704085] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:23.312 [2024-04-18 17:33:49.704092] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:23.312 [2024-04-18 17:33:49.704100] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.704109] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:23.312 [2024-04-18 17:33:49.704122] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:23.312 [2024-04-18 17:33:49.704139] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.704151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188700 00:17:23.312 [2024-04-18 17:33:49.704206] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.312 [2024-04-18 17:33:49.704215] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:23.312 [2024-04-18 17:33:49.704228] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:23.312 [2024-04-18 17:33:49.704237] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:23.312 [2024-04-18 17:33:49.704244] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:23.312 [2024-04-18 17:33:49.704257] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:23.312 [2024-04-18 17:33:49.704266] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:23.312 [2024-04-18 17:33:49.704273] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:23.312 [2024-04-18 17:33:49.704281] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.704292] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:23.312 [2024-04-18 17:33:49.704304] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.704316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.312 [2024-04-18 17:33:49.704344] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.312 [2024-04-18 17:33:49.704356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:23.312 [2024-04-18 17:33:49.704395] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0500 length 0x40 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.704408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.312 [2024-04-18 17:33:49.704419] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0640 length 0x40 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.704428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.312 [2024-04-18 17:33:49.704438] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.704448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.312 [2024-04-18 17:33:49.704457] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.704466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.312 [2024-04-18 17:33:49.704475] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:17:23.312 [2024-04-18 17:33:49.704483] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x188700 00:17:23.312 [2024-04-18 17:33:49.704500] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:23.312 [2024-04-18 17:33:49.704513] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.313 [2024-04-18 17:33:49.704524] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.313 [2024-04-18 17:33:49.704547] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.313 [2024-04-18 17:33:49.704556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:17:23.313 [2024-04-18 17:33:49.704568] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:23.313 [2024-04-18 17:33:49.704577] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:23.313 [2024-04-18 17:33:49.704585] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x188700 00:17:23.313 [2024-04-18 17:33:49.704600] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.313 [2024-04-18 17:33:49.704613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188700 00:17:23.313 [2024-04-18 17:33:49.704643] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.313 [2024-04-18 17:33:49.704653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:23.313 [2024-04-18 17:33:49.704679] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x188700 00:17:23.313 [2024-04-18 17:33:49.704694] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:23.313 [2024-04-18 17:33:49.704743] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.313 [2024-04-18 17:33:49.704757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x188700 00:17:23.313 [2024-04-18 17:33:49.704772] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x188700 00:17:23.313 [2024-04-18 17:33:49.704783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.313 [2024-04-18 17:33:49.704805] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.313 [2024-04-18 17:33:49.704814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:23.313 [2024-04-18 17:33:49.704833] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b40 length 0x40 lkey 0x188700 00:17:23.313 [2024-04-18 17:33:49.704845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x188700 00:17:23.313 [2024-04-18 17:33:49.704854] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x188700 00:17:23.313 [2024-04-18 17:33:49.704862] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.313 [2024-04-18 17:33:49.704870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:23.313 [2024-04-18 17:33:49.704878] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x188700 00:17:23.313 [2024-04-18 17:33:49.704886] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.313 [2024-04-18 17:33:49.704893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:23.313 [2024-04-18 17:33:49.704908] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x188700 00:17:23.313 [2024-04-18 17:33:49.704919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x188700 00:17:23.313 [2024-04-18 17:33:49.704928] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x188700 00:17:23.313 [2024-04-18 17:33:49.704955] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.313 [2024-04-18 17:33:49.704964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:23.313 [2024-04-18 17:33:49.704980] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x188700 00:17:23.313 ===================================================== 00:17:23.313 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:23.313 ===================================================== 00:17:23.313 Controller Capabilities/Features 00:17:23.313 ================================ 00:17:23.313 Vendor ID: 0000 00:17:23.313 Subsystem Vendor ID: 0000 00:17:23.313 Serial Number: .................... 00:17:23.313 Model Number: ........................................ 
00:17:23.313 Firmware Version: 24.05 00:17:23.313 Recommended Arb Burst: 0 00:17:23.313 IEEE OUI Identifier: 00 00 00 00:17:23.313 Multi-path I/O 00:17:23.313 May have multiple subsystem ports: No 00:17:23.313 May have multiple controllers: No 00:17:23.313 Associated with SR-IOV VF: No 00:17:23.313 Max Data Transfer Size: 131072 00:17:23.313 Max Number of Namespaces: 0 00:17:23.313 Max Number of I/O Queues: 1024 00:17:23.313 NVMe Specification Version (VS): 1.3 00:17:23.313 NVMe Specification Version (Identify): 1.3 00:17:23.313 Maximum Queue Entries: 128 00:17:23.313 Contiguous Queues Required: Yes 00:17:23.313 Arbitration Mechanisms Supported 00:17:23.313 Weighted Round Robin: Not Supported 00:17:23.313 Vendor Specific: Not Supported 00:17:23.313 Reset Timeout: 15000 ms 00:17:23.313 Doorbell Stride: 4 bytes 00:17:23.313 NVM Subsystem Reset: Not Supported 00:17:23.313 Command Sets Supported 00:17:23.313 NVM Command Set: Supported 00:17:23.313 Boot Partition: Not Supported 00:17:23.313 Memory Page Size Minimum: 4096 bytes 00:17:23.313 Memory Page Size Maximum: 4096 bytes 00:17:23.313 Persistent Memory Region: Not Supported 00:17:23.313 Optional Asynchronous Events Supported 00:17:23.313 Namespace Attribute Notices: Not Supported 00:17:23.313 Firmware Activation Notices: Not Supported 00:17:23.313 ANA Change Notices: Not Supported 00:17:23.313 PLE Aggregate Log Change Notices: Not Supported 00:17:23.313 LBA Status Info Alert Notices: Not Supported 00:17:23.313 EGE Aggregate Log Change Notices: Not Supported 00:17:23.313 Normal NVM Subsystem Shutdown event: Not Supported 00:17:23.313 Zone Descriptor Change Notices: Not Supported 00:17:23.313 Discovery Log Change Notices: Supported 00:17:23.313 Controller Attributes 00:17:23.313 128-bit Host Identifier: Not Supported 00:17:23.313 Non-Operational Permissive Mode: Not Supported 00:17:23.313 NVM Sets: Not Supported 00:17:23.313 Read Recovery Levels: Not Supported 00:17:23.313 Endurance Groups: Not Supported 00:17:23.313 Predictable Latency Mode: Not Supported 00:17:23.313 Traffic Based Keep ALive: Not Supported 00:17:23.313 Namespace Granularity: Not Supported 00:17:23.313 SQ Associations: Not Supported 00:17:23.313 UUID List: Not Supported 00:17:23.313 Multi-Domain Subsystem: Not Supported 00:17:23.313 Fixed Capacity Management: Not Supported 00:17:23.313 Variable Capacity Management: Not Supported 00:17:23.313 Delete Endurance Group: Not Supported 00:17:23.313 Delete NVM Set: Not Supported 00:17:23.313 Extended LBA Formats Supported: Not Supported 00:17:23.313 Flexible Data Placement Supported: Not Supported 00:17:23.313 00:17:23.313 Controller Memory Buffer Support 00:17:23.313 ================================ 00:17:23.313 Supported: No 00:17:23.313 00:17:23.313 Persistent Memory Region Support 00:17:23.313 ================================ 00:17:23.313 Supported: No 00:17:23.313 00:17:23.313 Admin Command Set Attributes 00:17:23.313 ============================ 00:17:23.313 Security Send/Receive: Not Supported 00:17:23.313 Format NVM: Not Supported 00:17:23.313 Firmware Activate/Download: Not Supported 00:17:23.313 Namespace Management: Not Supported 00:17:23.313 Device Self-Test: Not Supported 00:17:23.313 Directives: Not Supported 00:17:23.313 NVMe-MI: Not Supported 00:17:23.313 Virtualization Management: Not Supported 00:17:23.313 Doorbell Buffer Config: Not Supported 00:17:23.314 Get LBA Status Capability: Not Supported 00:17:23.314 Command & Feature Lockdown Capability: Not Supported 00:17:23.314 Abort Command Limit: 1 00:17:23.314 Async 
Event Request Limit: 4 00:17:23.314 Number of Firmware Slots: N/A 00:17:23.314 Firmware Slot 1 Read-Only: N/A 00:17:23.314 Firmware Activation Without Reset: N/A 00:17:23.314 Multiple Update Detection Support: N/A 00:17:23.314 Firmware Update Granularity: No Information Provided 00:17:23.314 Per-Namespace SMART Log: No 00:17:23.314 Asymmetric Namespace Access Log Page: Not Supported 00:17:23.314 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:23.314 Command Effects Log Page: Not Supported 00:17:23.314 Get Log Page Extended Data: Supported 00:17:23.314 Telemetry Log Pages: Not Supported 00:17:23.314 Persistent Event Log Pages: Not Supported 00:17:23.314 Supported Log Pages Log Page: May Support 00:17:23.314 Commands Supported & Effects Log Page: Not Supported 00:17:23.314 Feature Identifiers & Effects Log Page:May Support 00:17:23.314 NVMe-MI Commands & Effects Log Page: May Support 00:17:23.314 Data Area 4 for Telemetry Log: Not Supported 00:17:23.314 Error Log Page Entries Supported: 128 00:17:23.314 Keep Alive: Not Supported 00:17:23.314 00:17:23.314 NVM Command Set Attributes 00:17:23.314 ========================== 00:17:23.314 Submission Queue Entry Size 00:17:23.314 Max: 1 00:17:23.314 Min: 1 00:17:23.314 Completion Queue Entry Size 00:17:23.314 Max: 1 00:17:23.314 Min: 1 00:17:23.314 Number of Namespaces: 0 00:17:23.314 Compare Command: Not Supported 00:17:23.314 Write Uncorrectable Command: Not Supported 00:17:23.314 Dataset Management Command: Not Supported 00:17:23.314 Write Zeroes Command: Not Supported 00:17:23.314 Set Features Save Field: Not Supported 00:17:23.314 Reservations: Not Supported 00:17:23.314 Timestamp: Not Supported 00:17:23.314 Copy: Not Supported 00:17:23.314 Volatile Write Cache: Not Present 00:17:23.314 Atomic Write Unit (Normal): 1 00:17:23.314 Atomic Write Unit (PFail): 1 00:17:23.314 Atomic Compare & Write Unit: 1 00:17:23.314 Fused Compare & Write: Supported 00:17:23.314 Scatter-Gather List 00:17:23.314 SGL Command Set: Supported 00:17:23.314 SGL Keyed: Supported 00:17:23.314 SGL Bit Bucket Descriptor: Not Supported 00:17:23.314 SGL Metadata Pointer: Not Supported 00:17:23.314 Oversized SGL: Not Supported 00:17:23.314 SGL Metadata Address: Not Supported 00:17:23.314 SGL Offset: Supported 00:17:23.314 Transport SGL Data Block: Not Supported 00:17:23.314 Replay Protected Memory Block: Not Supported 00:17:23.314 00:17:23.314 Firmware Slot Information 00:17:23.314 ========================= 00:17:23.314 Active slot: 0 00:17:23.314 00:17:23.314 00:17:23.314 Error Log 00:17:23.314 ========= 00:17:23.314 00:17:23.314 Active Namespaces 00:17:23.314 ================= 00:17:23.314 Discovery Log Page 00:17:23.314 ================== 00:17:23.314 Generation Counter: 2 00:17:23.314 Number of Records: 2 00:17:23.314 Record Format: 0 00:17:23.314 00:17:23.314 Discovery Log Entry 0 00:17:23.314 ---------------------- 00:17:23.314 Transport Type: 1 (RDMA) 00:17:23.314 Address Family: 1 (IPv4) 00:17:23.314 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:23.314 Entry Flags: 00:17:23.314 Duplicate Returned Information: 1 00:17:23.314 Explicit Persistent Connection Support for Discovery: 1 00:17:23.314 Transport Requirements: 00:17:23.314 Secure Channel: Not Required 00:17:23.314 Port ID: 0 (0x0000) 00:17:23.314 Controller ID: 65535 (0xffff) 00:17:23.314 Admin Max SQ Size: 128 00:17:23.314 Transport Service Identifier: 4420 00:17:23.314 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:23.314 Transport Address: 192.168.100.8 00:17:23.314 
Transport Specific Address Subtype - RDMA 00:17:23.314 RDMA QP Service Type: 1 (Reliable Connected) 00:17:23.314 RDMA Provider Type: 1 (No provider specified) 00:17:23.314 RDMA CM Service: 1 (RDMA_CM) 00:17:23.314 Discovery Log Entry 1 00:17:23.314 ---------------------- 00:17:23.314 Transport Type: 1 (RDMA) 00:17:23.314 Address Family: 1 (IPv4) 00:17:23.314 Subsystem Type: 2 (NVM Subsystem) 00:17:23.314 Entry Flags: 00:17:23.314 Duplicate Returned Information: 0 00:17:23.314 Explicit Persistent Connection Support for Discovery: 0 00:17:23.314 Transport Requirements: 00:17:23.314 Secure Channel: Not Required 00:17:23.314 Port ID: 0 (0x0000) 00:17:23.314 Controller ID: 65535 (0xffff) 00:17:23.314 Admin Max SQ Size: [2024-04-18 17:33:49.705086] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:23.314 [2024-04-18 17:33:49.705106] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 57162 doesn't match qid 00:17:23.314 [2024-04-18 17:33:49.705123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32669 cdw0:5 sqhd:1790 p:0 m:0 dnr:0 00:17:23.314 [2024-04-18 17:33:49.705148] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 57162 doesn't match qid 00:17:23.314 [2024-04-18 17:33:49.705161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32669 cdw0:5 sqhd:1790 p:0 m:0 dnr:0 00:17:23.314 [2024-04-18 17:33:49.705170] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 57162 doesn't match qid 00:17:23.314 [2024-04-18 17:33:49.705181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32669 cdw0:5 sqhd:1790 p:0 m:0 dnr:0 00:17:23.314 [2024-04-18 17:33:49.705189] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 57162 doesn't match qid 00:17:23.314 [2024-04-18 17:33:49.705200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32669 cdw0:5 sqhd:1790 p:0 m:0 dnr:0 00:17:23.314 [2024-04-18 17:33:49.705214] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x188700 00:17:23.314 [2024-04-18 17:33:49.705230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.314 [2024-04-18 17:33:49.705249] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.314 [2024-04-18 17:33:49.705259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:17:23.314 [2024-04-18 17:33:49.705275] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.314 [2024-04-18 17:33:49.705288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.314 [2024-04-18 17:33:49.705297] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x188700 00:17:23.314 [2024-04-18 17:33:49.705316] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.314 [2024-04-18 17:33:49.705326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:23.314 [2024-04-18 17:33:49.705335] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:23.314 [2024-04-18 17:33:49.705344] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:23.314 [2024-04-18 17:33:49.705353] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x188700 00:17:23.314 [2024-04-18 17:33:49.705365] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.314 [2024-04-18 17:33:49.705377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.314 [2024-04-18 17:33:49.705404] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.314 [2024-04-18 17:33:49.705430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:17:23.314 [2024-04-18 17:33:49.705440] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x188700 00:17:23.314 [2024-04-18 17:33:49.705455] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.314 [2024-04-18 17:33:49.705468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.314 [2024-04-18 17:33:49.705487] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.314 [2024-04-18 17:33:49.705496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:17:23.314 [2024-04-18 17:33:49.705505] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x188700 00:17:23.314 [2024-04-18 17:33:49.705518] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.314 [2024-04-18 17:33:49.705537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.314 [2024-04-18 17:33:49.705561] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.314 [2024-04-18 17:33:49.705571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:17:23.314 [2024-04-18 17:33:49.705580] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x188700 00:17:23.314 [2024-04-18 17:33:49.705593] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.314 [2024-04-18 17:33:49.705605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.314 [2024-04-18 17:33:49.705625] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.314 [2024-04-18 17:33:49.705634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:17:23.314 [2024-04-18 17:33:49.705647] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x188700 00:17:23.314 [2024-04-18 17:33:49.705661] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 
00:17:23.314 [2024-04-18 17:33:49.705674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.705697] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.705706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.705715] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.705728] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.705756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.705776] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.705785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.705793] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.705806] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.705818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.705837] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.705846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.705854] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.705867] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.705878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.705898] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.705907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.705915] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.705928] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.705939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.705961] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.705970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.705978] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.705991] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.706023] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.706051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.706060] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706073] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.706121] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.706131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.706139] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706152] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.706181] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.706190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.706199] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706211] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.706259] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.706268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.706277] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706290] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.706326] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.706336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.706344] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706357] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.706397] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.706408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.706417] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706430] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.706465] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.706479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.706488] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706501] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.706532] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.706542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.706551] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706564] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.706598] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.706608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.706616] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706629] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.706683] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.706693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.706701] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706714] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.706757] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.706766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.706774] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706786] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.706816] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.706825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.706834] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706846] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.315 [2024-04-18 17:33:49.706857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.315 [2024-04-18 17:33:49.706874] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.315 [2024-04-18 17:33:49.706883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:23.315 [2024-04-18 17:33:49.706892] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.706904] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.706915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.316 [2024-04-18 17:33:49.706932] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.316 [2024-04-18 17:33:49.706941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:17:23.316 [2024-04-18 
17:33:49.706949] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.706962] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.706973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.316 [2024-04-18 17:33:49.706996] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.316 [2024-04-18 17:33:49.707005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:17:23.316 [2024-04-18 17:33:49.707013] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.707025] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.707036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.316 [2024-04-18 17:33:49.707057] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.316 [2024-04-18 17:33:49.707066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:17:23.316 [2024-04-18 17:33:49.707075] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.707087] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.707098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.316 [2024-04-18 17:33:49.707115] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.316 [2024-04-18 17:33:49.707124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:17:23.316 [2024-04-18 17:33:49.707132] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.707144] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.707155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.316 [2024-04-18 17:33:49.707172] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.316 [2024-04-18 17:33:49.707181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:17:23.316 [2024-04-18 17:33:49.707189] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.707201] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.707213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.316 [2024-04-18 17:33:49.707233] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.316 [2024-04-18 17:33:49.707243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:17:23.316 [2024-04-18 17:33:49.707251] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.707263] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.707275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.316 [2024-04-18 17:33:49.707290] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.316 [2024-04-18 17:33:49.707299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:17:23.316 [2024-04-18 17:33:49.707307] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.707319] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.707331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.316 [2024-04-18 17:33:49.707348] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.316 [2024-04-18 17:33:49.707356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:17:23.316 [2024-04-18 17:33:49.711388] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.711410] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.711424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.316 [2024-04-18 17:33:49.711465] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.316 [2024-04-18 17:33:49.711475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0010 p:0 m:0 dnr:0 00:17:23.316 [2024-04-18 17:33:49.711484] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x188700 00:17:23.316 [2024-04-18 17:33:49.711495] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:17:23.316 128 00:17:23.316 Transport Service Identifier: 4420 00:17:23.316 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:23.316 Transport Address: 192.168.100.8 00:17:23.316 Transport Specific Address Subtype - RDMA 00:17:23.316 RDMA QP Service Type: 1 (Reliable Connected) 00:17:23.316 RDMA Provider Type: 1 (No provider specified) 00:17:23.316 RDMA CM Service: 1 (RDMA_CM) 00:17:23.316 17:33:49 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:23.316 [2024-04-18 17:33:49.780785] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 
23.11.0 initialization... 00:17:23.316 [2024-04-18 17:33:49.780828] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043185 ] 00:17:23.316 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.316 [2024-04-18 17:33:49.830028] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:23.316 [2024-04-18 17:33:49.830113] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:17:23.316 [2024-04-18 17:33:49.830135] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:17:23.316 [2024-04-18 17:33:49.830144] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:17:23.316 [2024-04-18 17:33:49.830176] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:23.316 [2024-04-18 17:33:49.842733] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:17:23.581 [2024-04-18 17:33:49.858900] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:23.581 [2024-04-18 17:33:49.858930] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:17:23.581 [2024-04-18 17:33:49.858941] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x188700 00:17:23.581 [2024-04-18 17:33:49.858951] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x188700 00:17:23.581 [2024-04-18 17:33:49.858960] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x188700 00:17:23.581 [2024-04-18 17:33:49.858969] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x188700 00:17:23.581 [2024-04-18 17:33:49.858978] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x188700 00:17:23.581 [2024-04-18 17:33:49.858987] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x188700 00:17:23.581 [2024-04-18 17:33:49.858996] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x188700 00:17:23.581 [2024-04-18 17:33:49.859004] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x188700 00:17:23.581 [2024-04-18 17:33:49.859013] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859022] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859031] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859039] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859048] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859056] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859065] nvme_rdma.c: 
968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859074] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859083] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859091] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859100] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859108] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859132] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859142] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859151] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859160] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859173] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859183] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859191] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859200] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859209] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859218] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859226] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859235] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:17:23.582 [2024-04-18 17:33:49.859243] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:17:23.582 [2024-04-18 17:33:49.859249] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:17:23.582 [2024-04-18 17:33:49.859272] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.859289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf200 len:0x400 key:0x188700 00:17:23.582 [2024-04-18 17:33:49.866389] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.582 [2024-04-18 17:33:49.866406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:23.582 [2024-04-18 17:33:49.866416] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.866427] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:23.582 [2024-04-18 17:33:49.866437] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:23.582 [2024-04-18 17:33:49.866447] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:23.582 [2024-04-18 17:33:49.866464] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.866478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.582 [2024-04-18 17:33:49.866500] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.582 [2024-04-18 17:33:49.866510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:17:23.582 [2024-04-18 17:33:49.866525] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:23.582 [2024-04-18 17:33:49.866534] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.866545] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:23.582 [2024-04-18 17:33:49.866556] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.866568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.582 [2024-04-18 17:33:49.866589] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.582 [2024-04-18 17:33:49.866598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:17:23.582 [2024-04-18 17:33:49.866607] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:23.582 [2024-04-18 17:33:49.866616] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.866630] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:23.582 [2024-04-18 17:33:49.866642] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.866654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.582 [2024-04-18 17:33:49.866689] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.582 [2024-04-18 17:33:49.866699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:23.582 [2024-04-18 17:33:49.866708] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:23.582 [2024-04-18 17:33:49.866716] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x188700 
00:17:23.582 [2024-04-18 17:33:49.866728] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.866740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.582 [2024-04-18 17:33:49.866756] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.582 [2024-04-18 17:33:49.866764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:23.582 [2024-04-18 17:33:49.866773] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:23.582 [2024-04-18 17:33:49.866781] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:23.582 [2024-04-18 17:33:49.866788] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.866798] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:23.582 [2024-04-18 17:33:49.866907] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:23.582 [2024-04-18 17:33:49.866914] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:23.582 [2024-04-18 17:33:49.866925] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.866937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.582 [2024-04-18 17:33:49.866955] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.582 [2024-04-18 17:33:49.866964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:23.582 [2024-04-18 17:33:49.866973] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:23.582 [2024-04-18 17:33:49.866981] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.866993] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.867005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.582 [2024-04-18 17:33:49.867025] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.582 [2024-04-18 17:33:49.867033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:23.582 [2024-04-18 17:33:49.867045] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:23.582 [2024-04-18 17:33:49.867053] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 
30000 ms) 00:17:23.582 [2024-04-18 17:33:49.867061] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.867071] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:23.582 [2024-04-18 17:33:49.867083] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:23.582 [2024-04-18 17:33:49.867098] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.582 [2024-04-18 17:33:49.867110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188700 00:17:23.582 [2024-04-18 17:33:49.867168] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.582 [2024-04-18 17:33:49.867177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:23.582 [2024-04-18 17:33:49.867188] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:23.582 [2024-04-18 17:33:49.867197] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:23.582 [2024-04-18 17:33:49.867204] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:23.583 [2024-04-18 17:33:49.867214] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:23.583 [2024-04-18 17:33:49.867223] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:23.583 [2024-04-18 17:33:49.867231] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.867238] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867249] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.867260] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.583 [2024-04-18 17:33:49.867296] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.583 [2024-04-18 17:33:49.867305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:23.583 [2024-04-18 17:33:49.867316] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0500 length 0x40 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.583 [2024-04-18 17:33:49.867337] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0640 length 0x40 lkey 0x188700 00:17:23.583 
[2024-04-18 17:33:49.867347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.583 [2024-04-18 17:33:49.867356] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.583 [2024-04-18 17:33:49.867400] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.583 [2024-04-18 17:33:49.867424] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.867432] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867449] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.867462] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.583 [2024-04-18 17:33:49.867494] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.583 [2024-04-18 17:33:49.867503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:17:23.583 [2024-04-18 17:33:49.867512] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:23.583 [2024-04-18 17:33:49.867521] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.867529] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867541] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.867553] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.867565] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.583 [2024-04-18 17:33:49.867606] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.583 [2024-04-18 17:33:49.867616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:17:23.583 [2024-04-18 17:33:49.867687] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.867698] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867711] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.867742] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x188700 00:17:23.583 [2024-04-18 17:33:49.867783] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.583 [2024-04-18 17:33:49.867793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:23.583 [2024-04-18 17:33:49.867812] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:23.583 [2024-04-18 17:33:49.867833] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.867842] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867859] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.867874] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188700 00:17:23.583 [2024-04-18 17:33:49.867927] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.583 [2024-04-18 17:33:49.867937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:23.583 [2024-04-18 17:33:49.867961] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.867971] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.867984] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.867998] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.868009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x188700 00:17:23.583 [2024-04-18 17:33:49.868035] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.583 [2024-04-18 17:33:49.868044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:23.583 [2024-04-18 17:33:49.868059] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.868068] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.868078] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.868093] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.868105] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.868114] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.868124] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:23.583 [2024-04-18 17:33:49.868131] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:23.583 [2024-04-18 17:33:49.868140] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:23.583 [2024-04-18 17:33:49.868159] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.868171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.583 [2024-04-18 17:33:49.868182] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.868192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:23.583 [2024-04-18 17:33:49.868208] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.583 [2024-04-18 17:33:49.868221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:23.583 [2024-04-18 17:33:49.868232] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x188700 00:17:23.583 [2024-04-18 17:33:49.868241] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.583 [2024-04-18 17:33:49.868249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:23.583 [2024-04-18 17:33:49.868257] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868270] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.584 [2024-04-18 
17:33:49.868299] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.584 [2024-04-18 17:33:49.868308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:23.584 [2024-04-18 17:33:49.868317] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868329] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.584 [2024-04-18 17:33:49.868379] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.584 [2024-04-18 17:33:49.868398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:23.584 [2024-04-18 17:33:49.868407] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868421] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.584 [2024-04-18 17:33:49.868454] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.584 [2024-04-18 17:33:49.868463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:17:23.584 [2024-04-18 17:33:49.868472] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868490] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x188700 00:17:23.584 [2024-04-18 17:33:49.868517] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x188700 00:17:23.584 [2024-04-18 17:33:49.868541] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b40 length 0x40 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x188700 00:17:23.584 [2024-04-18 17:33:49.868566] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c80 length 0x40 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x188700 
00:17:23.584 [2024-04-18 17:33:49.868595] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.584 [2024-04-18 17:33:49.868604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:23.584 [2024-04-18 17:33:49.868624] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868634] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.584 [2024-04-18 17:33:49.868643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:23.584 [2024-04-18 17:33:49.868655] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868665] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.584 [2024-04-18 17:33:49.868673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:23.584 [2024-04-18 17:33:49.868698] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x188700 00:17:23.584 [2024-04-18 17:33:49.868708] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.584 [2024-04-18 17:33:49.868715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:23.584 [2024-04-18 17:33:49.868729] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x188700 00:17:23.584 ===================================================== 00:17:23.584 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:23.584 ===================================================== 00:17:23.584 Controller Capabilities/Features 00:17:23.584 ================================ 00:17:23.584 Vendor ID: 8086 00:17:23.584 Subsystem Vendor ID: 8086 00:17:23.584 Serial Number: SPDK00000000000001 00:17:23.584 Model Number: SPDK bdev Controller 00:17:23.584 Firmware Version: 24.05 00:17:23.584 Recommended Arb Burst: 6 00:17:23.584 IEEE OUI Identifier: e4 d2 5c 00:17:23.584 Multi-path I/O 00:17:23.584 May have multiple subsystem ports: Yes 00:17:23.584 May have multiple controllers: Yes 00:17:23.584 Associated with SR-IOV VF: No 00:17:23.584 Max Data Transfer Size: 131072 00:17:23.584 Max Number of Namespaces: 32 00:17:23.584 Max Number of I/O Queues: 127 00:17:23.584 NVMe Specification Version (VS): 1.3 00:17:23.584 NVMe Specification Version (Identify): 1.3 00:17:23.584 Maximum Queue Entries: 128 00:17:23.584 Contiguous Queues Required: Yes 00:17:23.584 Arbitration Mechanisms Supported 00:17:23.584 Weighted Round Robin: Not Supported 00:17:23.584 Vendor Specific: Not Supported 00:17:23.584 Reset Timeout: 15000 ms 00:17:23.584 Doorbell Stride: 4 bytes 00:17:23.584 NVM Subsystem Reset: Not Supported 00:17:23.584 Command Sets Supported 00:17:23.584 NVM Command Set: Supported 00:17:23.584 Boot Partition: Not Supported 00:17:23.584 Memory Page Size Minimum: 4096 bytes 00:17:23.584 Memory Page Size Maximum: 4096 bytes 00:17:23.584 Persistent Memory Region: Not Supported 00:17:23.584 Optional Asynchronous Events Supported 00:17:23.584 Namespace Attribute Notices: Supported 00:17:23.584 Firmware Activation Notices: Not Supported 00:17:23.584 ANA Change Notices: Not Supported 00:17:23.584 PLE Aggregate 
Log Change Notices: Not Supported 00:17:23.584 LBA Status Info Alert Notices: Not Supported 00:17:23.584 EGE Aggregate Log Change Notices: Not Supported 00:17:23.584 Normal NVM Subsystem Shutdown event: Not Supported 00:17:23.584 Zone Descriptor Change Notices: Not Supported 00:17:23.584 Discovery Log Change Notices: Not Supported 00:17:23.584 Controller Attributes 00:17:23.584 128-bit Host Identifier: Supported 00:17:23.584 Non-Operational Permissive Mode: Not Supported 00:17:23.584 NVM Sets: Not Supported 00:17:23.584 Read Recovery Levels: Not Supported 00:17:23.584 Endurance Groups: Not Supported 00:17:23.584 Predictable Latency Mode: Not Supported 00:17:23.584 Traffic Based Keep ALive: Not Supported 00:17:23.584 Namespace Granularity: Not Supported 00:17:23.584 SQ Associations: Not Supported 00:17:23.584 UUID List: Not Supported 00:17:23.584 Multi-Domain Subsystem: Not Supported 00:17:23.584 Fixed Capacity Management: Not Supported 00:17:23.584 Variable Capacity Management: Not Supported 00:17:23.584 Delete Endurance Group: Not Supported 00:17:23.584 Delete NVM Set: Not Supported 00:17:23.584 Extended LBA Formats Supported: Not Supported 00:17:23.584 Flexible Data Placement Supported: Not Supported 00:17:23.584 00:17:23.584 Controller Memory Buffer Support 00:17:23.584 ================================ 00:17:23.584 Supported: No 00:17:23.584 00:17:23.584 Persistent Memory Region Support 00:17:23.584 ================================ 00:17:23.584 Supported: No 00:17:23.584 00:17:23.584 Admin Command Set Attributes 00:17:23.584 ============================ 00:17:23.584 Security Send/Receive: Not Supported 00:17:23.584 Format NVM: Not Supported 00:17:23.584 Firmware Activate/Download: Not Supported 00:17:23.584 Namespace Management: Not Supported 00:17:23.584 Device Self-Test: Not Supported 00:17:23.584 Directives: Not Supported 00:17:23.584 NVMe-MI: Not Supported 00:17:23.584 Virtualization Management: Not Supported 00:17:23.584 Doorbell Buffer Config: Not Supported 00:17:23.584 Get LBA Status Capability: Not Supported 00:17:23.584 Command & Feature Lockdown Capability: Not Supported 00:17:23.584 Abort Command Limit: 4 00:17:23.584 Async Event Request Limit: 4 00:17:23.584 Number of Firmware Slots: N/A 00:17:23.584 Firmware Slot 1 Read-Only: N/A 00:17:23.584 Firmware Activation Without Reset: N/A 00:17:23.584 Multiple Update Detection Support: N/A 00:17:23.584 Firmware Update Granularity: No Information Provided 00:17:23.584 Per-Namespace SMART Log: No 00:17:23.584 Asymmetric Namespace Access Log Page: Not Supported 00:17:23.584 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:23.584 Command Effects Log Page: Supported 00:17:23.584 Get Log Page Extended Data: Supported 00:17:23.584 Telemetry Log Pages: Not Supported 00:17:23.584 Persistent Event Log Pages: Not Supported 00:17:23.584 Supported Log Pages Log Page: May Support 00:17:23.584 Commands Supported & Effects Log Page: Not Supported 00:17:23.584 Feature Identifiers & Effects Log Page:May Support 00:17:23.584 NVMe-MI Commands & Effects Log Page: May Support 00:17:23.584 Data Area 4 for Telemetry Log: Not Supported 00:17:23.584 Error Log Page Entries Supported: 128 00:17:23.584 Keep Alive: Supported 00:17:23.584 Keep Alive Granularity: 10000 ms 00:17:23.584 00:17:23.584 NVM Command Set Attributes 00:17:23.584 ========================== 00:17:23.584 Submission Queue Entry Size 00:17:23.584 Max: 64 00:17:23.584 Min: 64 00:17:23.584 Completion Queue Entry Size 00:17:23.584 Max: 16 00:17:23.585 Min: 16 00:17:23.585 Number of Namespaces: 32 
00:17:23.585 Compare Command: Supported 00:17:23.585 Write Uncorrectable Command: Not Supported 00:17:23.585 Dataset Management Command: Supported 00:17:23.585 Write Zeroes Command: Supported 00:17:23.585 Set Features Save Field: Not Supported 00:17:23.585 Reservations: Supported 00:17:23.585 Timestamp: Not Supported 00:17:23.585 Copy: Supported 00:17:23.585 Volatile Write Cache: Present 00:17:23.585 Atomic Write Unit (Normal): 1 00:17:23.585 Atomic Write Unit (PFail): 1 00:17:23.585 Atomic Compare & Write Unit: 1 00:17:23.585 Fused Compare & Write: Supported 00:17:23.585 Scatter-Gather List 00:17:23.585 SGL Command Set: Supported 00:17:23.585 SGL Keyed: Supported 00:17:23.585 SGL Bit Bucket Descriptor: Not Supported 00:17:23.585 SGL Metadata Pointer: Not Supported 00:17:23.585 Oversized SGL: Not Supported 00:17:23.585 SGL Metadata Address: Not Supported 00:17:23.585 SGL Offset: Supported 00:17:23.585 Transport SGL Data Block: Not Supported 00:17:23.585 Replay Protected Memory Block: Not Supported 00:17:23.585 00:17:23.585 Firmware Slot Information 00:17:23.585 ========================= 00:17:23.585 Active slot: 1 00:17:23.585 Slot 1 Firmware Revision: 24.05 00:17:23.585 00:17:23.585 00:17:23.585 Commands Supported and Effects 00:17:23.585 ============================== 00:17:23.585 Admin Commands 00:17:23.585 -------------- 00:17:23.585 Get Log Page (02h): Supported 00:17:23.585 Identify (06h): Supported 00:17:23.585 Abort (08h): Supported 00:17:23.585 Set Features (09h): Supported 00:17:23.585 Get Features (0Ah): Supported 00:17:23.585 Asynchronous Event Request (0Ch): Supported 00:17:23.585 Keep Alive (18h): Supported 00:17:23.585 I/O Commands 00:17:23.585 ------------ 00:17:23.585 Flush (00h): Supported LBA-Change 00:17:23.585 Write (01h): Supported LBA-Change 00:17:23.585 Read (02h): Supported 00:17:23.585 Compare (05h): Supported 00:17:23.585 Write Zeroes (08h): Supported LBA-Change 00:17:23.585 Dataset Management (09h): Supported LBA-Change 00:17:23.585 Copy (19h): Supported LBA-Change 00:17:23.585 Unknown (79h): Supported LBA-Change 00:17:23.585 Unknown (7Ah): Supported 00:17:23.585 00:17:23.585 Error Log 00:17:23.585 ========= 00:17:23.585 00:17:23.585 Arbitration 00:17:23.585 =========== 00:17:23.585 Arbitration Burst: 1 00:17:23.585 00:17:23.585 Power Management 00:17:23.585 ================ 00:17:23.585 Number of Power States: 1 00:17:23.585 Current Power State: Power State #0 00:17:23.585 Power State #0: 00:17:23.585 Max Power: 0.00 W 00:17:23.585 Non-Operational State: Operational 00:17:23.585 Entry Latency: Not Reported 00:17:23.585 Exit Latency: Not Reported 00:17:23.585 Relative Read Throughput: 0 00:17:23.585 Relative Read Latency: 0 00:17:23.585 Relative Write Throughput: 0 00:17:23.585 Relative Write Latency: 0 00:17:23.585 Idle Power: Not Reported 00:17:23.585 Active Power: Not Reported 00:17:23.585 Non-Operational Permissive Mode: Not Supported 00:17:23.585 00:17:23.585 Health Information 00:17:23.585 ================== 00:17:23.585 Critical Warnings: 00:17:23.585 Available Spare Space: OK 00:17:23.585 Temperature: OK 00:17:23.585 Device Reliability: OK 00:17:23.585 Read Only: No 00:17:23.585 Volatile Memory Backup: OK 00:17:23.585 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:23.585 Temperature Threshold: [2024-04-18 17:33:49.868835] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c80 length 0x40 lkey 0x188700 00:17:23.585 [2024-04-18 17:33:49.868851] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.585 [2024-04-18 17:33:49.868875] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.585 [2024-04-18 17:33:49.868885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:23.585 [2024-04-18 17:33:49.868893] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x188700 00:17:23.585 [2024-04-18 17:33:49.868930] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:23.585 [2024-04-18 17:33:49.868946] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 42849 doesn't match qid 00:17:23.585 [2024-04-18 17:33:49.868964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32580 cdw0:5 sqhd:c790 p:0 m:0 dnr:0 00:17:23.585 [2024-04-18 17:33:49.868990] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 42849 doesn't match qid 00:17:23.585 [2024-04-18 17:33:49.869003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32580 cdw0:5 sqhd:c790 p:0 m:0 dnr:0 00:17:23.585 [2024-04-18 17:33:49.869012] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 42849 doesn't match qid 00:17:23.585 [2024-04-18 17:33:49.869024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32580 cdw0:5 sqhd:c790 p:0 m:0 dnr:0 00:17:23.585 [2024-04-18 17:33:49.869033] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 42849 doesn't match qid 00:17:23.585 [2024-04-18 17:33:49.869045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32580 cdw0:5 sqhd:c790 p:0 m:0 dnr:0 00:17:23.585 [2024-04-18 17:33:49.869058] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x188700 00:17:23.585 [2024-04-18 17:33:49.869071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.585 [2024-04-18 17:33:49.869093] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.585 [2024-04-18 17:33:49.869107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:17:23.585 [2024-04-18 17:33:49.869120] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.585 [2024-04-18 17:33:49.869132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.585 [2024-04-18 17:33:49.869141] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x188700 00:17:23.585 [2024-04-18 17:33:49.869158] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.585 [2024-04-18 17:33:49.869168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:23.585 [2024-04-18 17:33:49.869176] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:23.585 [2024-04-18 17:33:49.869184] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 
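The "Prepare to destruct SSD" entry above is where the host begins the fabrics shutdown handshake: it writes CC.SHN with a Fabrics Property Set command and then polls CSTS with repeated Fabrics Property Get commands until the controller reports shutdown complete, which is what the long run of FABRIC PROPERTY GET records below shows. The interleaved "ABORTED - SQ DELETION" completions are outstanding admin commands being failed back as the queue pair is torn down. A quick way to pull just that handshake out of a saved copy of this console output (the file name here is hypothetical):

  grep -E 'Prepare to destruct|nvme_ctrlr_shutdown|FABRIC PROPERTY (GET|SET)' nvmf_identify_console.log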
00:17:23.585 [2024-04-18 17:33:49.869192] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x188700 00:17:23.585 [2024-04-18 17:33:49.869205] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.585 [2024-04-18 17:33:49.869218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.585 [2024-04-18 17:33:49.869238] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.585 [2024-04-18 17:33:49.869248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:23.585 [2024-04-18 17:33:49.869257] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x188700 00:17:23.585 [2024-04-18 17:33:49.869270] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.585 [2024-04-18 17:33:49.869282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 17:33:49.869298] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.869308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:23.586 [2024-04-18 17:33:49.869317] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869330] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 17:33:49.869364] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.869374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:23.586 [2024-04-18 17:33:49.869391] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869406] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 17:33:49.869439] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.869449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:23.586 [2024-04-18 17:33:49.869457] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869471] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 
17:33:49.869509] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.869519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:23.586 [2024-04-18 17:33:49.869528] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869541] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 17:33:49.869572] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.869581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:23.586 [2024-04-18 17:33:49.869591] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869604] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 17:33:49.869638] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.869647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:23.586 [2024-04-18 17:33:49.869657] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869670] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 17:33:49.869717] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.869726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:23.586 [2024-04-18 17:33:49.869735] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869748] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 17:33:49.869782] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.869791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:23.586 [2024-04-18 17:33:49.869800] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869812] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 17:33:49.869842] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.869851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:17:23.586 [2024-04-18 17:33:49.869860] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869876] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 17:33:49.869908] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.869917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:17:23.586 [2024-04-18 17:33:49.869926] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869939] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 17:33:49.869966] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.869975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:17:23.586 [2024-04-18 17:33:49.869984] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.869997] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.870009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 17:33:49.870026] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.870035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:23.586 [2024-04-18 17:33:49.870044] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.870056] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.870068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 17:33:49.870089] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.870098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:17:23.586 
[2024-04-18 17:33:49.870107] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.870120] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.586 [2024-04-18 17:33:49.870132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.586 [2024-04-18 17:33:49.870149] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.586 [2024-04-18 17:33:49.870158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:17:23.586 [2024-04-18 17:33:49.870167] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x188700 00:17:23.587 [2024-04-18 17:33:49.870180] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.587 [2024-04-18 17:33:49.870192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.587 [2024-04-18 17:33:49.870209] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.587 [2024-04-18 17:33:49.870218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:17:23.587 [2024-04-18 17:33:49.870232] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x188700 00:17:23.587 [2024-04-18 17:33:49.870246] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.587 [2024-04-18 17:33:49.870258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.587 [2024-04-18 17:33:49.870282] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.587 [2024-04-18 17:33:49.870291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:17:23.587 [2024-04-18 17:33:49.870299] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x188700 00:17:23.587 [2024-04-18 17:33:49.870312] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.587 [2024-04-18 17:33:49.870324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.587 [2024-04-18 17:33:49.870352] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.587 [2024-04-18 17:33:49.870375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:17:23.587 [2024-04-18 17:33:49.874398] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x188700 00:17:23.587 [2024-04-18 17:33:49.874418] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x188700 00:17:23.587 [2024-04-18 17:33:49.874432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:23.587 [2024-04-18 17:33:49.874456] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:23.587 [2024-04-18 17:33:49.874466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000d p:0 m:0 dnr:0 00:17:23.587 [2024-04-18 17:33:49.874476] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x188700 00:17:23.587 [2024-04-18 17:33:49.874486] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:17:23.587 0 Kelvin (-273 Celsius) 00:17:23.587 Available Spare: 0% 00:17:23.587 Available Spare Threshold: 0% 00:17:23.587 Life Percentage Used: 0% 00:17:23.587 Data Units Read: 0 00:17:23.587 Data Units Written: 0 00:17:23.587 Host Read Commands: 0 00:17:23.587 Host Write Commands: 0 00:17:23.587 Controller Busy Time: 0 minutes 00:17:23.587 Power Cycles: 0 00:17:23.587 Power On Hours: 0 hours 00:17:23.587 Unsafe Shutdowns: 0 00:17:23.587 Unrecoverable Media Errors: 0 00:17:23.587 Lifetime Error Log Entries: 0 00:17:23.587 Warning Temperature Time: 0 minutes 00:17:23.587 Critical Temperature Time: 0 minutes 00:17:23.587 00:17:23.587 Number of Queues 00:17:23.587 ================ 00:17:23.587 Number of I/O Submission Queues: 127 00:17:23.587 Number of I/O Completion Queues: 127 00:17:23.587 00:17:23.587 Active Namespaces 00:17:23.587 ================= 00:17:23.587 Namespace ID:1 00:17:23.587 Error Recovery Timeout: Unlimited 00:17:23.587 Command Set Identifier: NVM (00h) 00:17:23.587 Deallocate: Supported 00:17:23.587 Deallocated/Unwritten Error: Not Supported 00:17:23.587 Deallocated Read Value: Unknown 00:17:23.587 Deallocate in Write Zeroes: Not Supported 00:17:23.587 Deallocated Guard Field: 0xFFFF 00:17:23.587 Flush: Supported 00:17:23.587 Reservation: Supported 00:17:23.587 Namespace Sharing Capabilities: Multiple Controllers 00:17:23.587 Size (in LBAs): 131072 (0GiB) 00:17:23.587 Capacity (in LBAs): 131072 (0GiB) 00:17:23.587 Utilization (in LBAs): 131072 (0GiB) 00:17:23.587 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:23.587 EUI64: ABCDEF0123456789 00:17:23.587 UUID: 812ba5b1-7153-4f09-b98d-d67429cdb834 00:17:23.587 Thin Provisioning: Not Supported 00:17:23.587 Per-NS Atomic Units: Yes 00:17:23.587 Atomic Boundary Size (Normal): 0 00:17:23.587 Atomic Boundary Size (PFail): 0 00:17:23.587 Atomic Boundary Offset: 0 00:17:23.587 Maximum Single Source Range Length: 65535 00:17:23.587 Maximum Copy Length: 65535 00:17:23.587 Maximum Source Range Count: 1 00:17:23.587 NGUID/EUI64 Never Reused: No 00:17:23.587 Namespace Write Protected: No 00:17:23.587 Number of LBA Formats: 1 00:17:23.587 Current LBA Format: LBA Format #00 00:17:23.587 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:23.587 00:17:23.587 17:33:49 -- host/identify.sh@51 -- # sync 00:17:23.587 17:33:49 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.587 17:33:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.587 17:33:49 -- common/autotest_common.sh@10 -- # set +x 00:17:23.587 17:33:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.587 17:33:49 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:23.587 17:33:49 -- host/identify.sh@56 -- # nvmftestfini 00:17:23.587 17:33:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:23.587 17:33:49 -- nvmf/common.sh@117 -- # sync 00:17:23.587 17:33:49 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:23.587 17:33:49 -- nvmf/common.sh@119 -- # '[' rdma == rdma 
']' 00:17:23.587 17:33:49 -- nvmf/common.sh@120 -- # set +e 00:17:23.587 17:33:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.587 17:33:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:23.587 rmmod nvme_rdma 00:17:23.587 rmmod nvme_fabrics 00:17:23.587 17:33:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.587 17:33:49 -- nvmf/common.sh@124 -- # set -e 00:17:23.587 17:33:49 -- nvmf/common.sh@125 -- # return 0 00:17:23.587 17:33:49 -- nvmf/common.sh@478 -- # '[' -n 2043038 ']' 00:17:23.587 17:33:49 -- nvmf/common.sh@479 -- # killprocess 2043038 00:17:23.587 17:33:49 -- common/autotest_common.sh@936 -- # '[' -z 2043038 ']' 00:17:23.587 17:33:49 -- common/autotest_common.sh@940 -- # kill -0 2043038 00:17:23.587 17:33:49 -- common/autotest_common.sh@941 -- # uname 00:17:23.587 17:33:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:23.587 17:33:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2043038 00:17:23.587 17:33:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:23.587 17:33:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:23.587 17:33:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2043038' 00:17:23.587 killing process with pid 2043038 00:17:23.587 17:33:50 -- common/autotest_common.sh@955 -- # kill 2043038 00:17:23.587 [2024-04-18 17:33:50.008582] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:23.587 17:33:50 -- common/autotest_common.sh@960 -- # wait 2043038 00:17:24.156 17:33:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:24.156 17:33:50 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:24.156 00:17:24.156 real 0m3.384s 00:17:24.156 user 0m5.094s 00:17:24.156 sys 0m1.757s 00:17:24.156 17:33:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:24.156 17:33:50 -- common/autotest_common.sh@10 -- # set +x 00:17:24.156 ************************************ 00:17:24.156 END TEST nvmf_identify 00:17:24.156 ************************************ 00:17:24.156 17:33:50 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:17:24.156 17:33:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:24.156 17:33:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:24.156 17:33:50 -- common/autotest_common.sh@10 -- # set +x 00:17:24.156 ************************************ 00:17:24.156 START TEST nvmf_perf 00:17:24.156 ************************************ 00:17:24.156 17:33:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:17:24.156 * Looking for test storage... 
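The timing block and the END/START banners above mark the hand-off from the identify test to the perf test: nvmftestfini kills the target process and unloads nvme-rdma, then run_test launches test/nvmf/host/perf.sh with --transport=rdma. The same stage can be rerun on its own from the workspace checkout (it needs root, since it loads RDMA modules and starts the target); a minimal sketch using the paths recorded in this trace:

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  test/nvmf/host/perf.sh --transport=rdma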
00:17:24.156 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:24.156 17:33:50 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.156 17:33:50 -- nvmf/common.sh@7 -- # uname -s 00:17:24.157 17:33:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.157 17:33:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.157 17:33:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.157 17:33:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.157 17:33:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.157 17:33:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.157 17:33:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.157 17:33:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.157 17:33:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.157 17:33:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.157 17:33:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.157 17:33:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.157 17:33:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.157 17:33:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.157 17:33:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.157 17:33:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.157 17:33:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:24.157 17:33:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.157 17:33:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.157 17:33:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.157 17:33:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.157 17:33:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.157 17:33:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.157 17:33:50 -- paths/export.sh@5 -- # export PATH 00:17:24.157 17:33:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.157 17:33:50 -- nvmf/common.sh@47 -- # : 0 00:17:24.157 17:33:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:24.157 17:33:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:24.157 17:33:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.157 17:33:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.157 17:33:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.157 17:33:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:24.157 17:33:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:24.157 17:33:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:24.157 17:33:50 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:24.157 17:33:50 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:24.157 17:33:50 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:24.157 17:33:50 -- host/perf.sh@17 -- # nvmftestinit 00:17:24.157 17:33:50 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:24.157 17:33:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.157 17:33:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:24.157 17:33:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:24.157 17:33:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:24.157 17:33:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.157 17:33:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.157 17:33:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.157 17:33:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:24.157 17:33:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:24.157 17:33:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:24.157 17:33:50 -- common/autotest_common.sh@10 -- # set +x 00:17:26.059 17:33:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:26.059 17:33:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:26.059 17:33:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:26.059 17:33:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:26.059 17:33:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:26.059 17:33:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:26.059 17:33:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:26.059 17:33:52 -- nvmf/common.sh@295 -- # net_devs=() 
00:17:26.059 17:33:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:26.059 17:33:52 -- nvmf/common.sh@296 -- # e810=() 00:17:26.059 17:33:52 -- nvmf/common.sh@296 -- # local -ga e810 00:17:26.059 17:33:52 -- nvmf/common.sh@297 -- # x722=() 00:17:26.059 17:33:52 -- nvmf/common.sh@297 -- # local -ga x722 00:17:26.059 17:33:52 -- nvmf/common.sh@298 -- # mlx=() 00:17:26.059 17:33:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:26.059 17:33:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.059 17:33:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.059 17:33:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.059 17:33:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.059 17:33:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.059 17:33:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.059 17:33:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.059 17:33:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.059 17:33:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.059 17:33:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.059 17:33:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.059 17:33:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:26.059 17:33:52 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:26.059 17:33:52 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:26.059 17:33:52 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:26.059 17:33:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:26.059 17:33:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.059 17:33:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:17:26.059 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:17:26.059 17:33:52 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:26.059 17:33:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:26.059 17:33:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:17:26.059 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:17:26.059 17:33:52 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:26.059 17:33:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:26.059 17:33:52 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.059 17:33:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.059 17:33:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:26.059 17:33:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.059 17:33:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: 
mlx_0_0' 00:17:26.059 Found net devices under 0000:09:00.0: mlx_0_0 00:17:26.059 17:33:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.059 17:33:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:26.059 17:33:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.059 17:33:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:26.059 17:33:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.059 17:33:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:17:26.059 Found net devices under 0000:09:00.1: mlx_0_1 00:17:26.059 17:33:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.059 17:33:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:26.059 17:33:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:26.059 17:33:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:26.059 17:33:52 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:26.059 17:33:52 -- nvmf/common.sh@58 -- # uname 00:17:26.059 17:33:52 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:26.059 17:33:52 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:26.059 17:33:52 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:26.059 17:33:52 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:26.059 17:33:52 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:26.059 17:33:52 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:26.059 17:33:52 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:26.059 17:33:52 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:26.059 17:33:52 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:26.059 17:33:52 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:26.059 17:33:52 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:26.059 17:33:52 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:26.059 17:33:52 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:26.059 17:33:52 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:26.059 17:33:52 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:26.059 17:33:52 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:26.059 17:33:52 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:26.059 17:33:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:26.059 17:33:52 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:26.059 17:33:52 -- nvmf/common.sh@105 -- # continue 2 00:17:26.059 17:33:52 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:26.059 17:33:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:26.059 17:33:52 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:26.059 17:33:52 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:26.059 17:33:52 -- nvmf/common.sh@105 -- # continue 2 00:17:26.059 17:33:52 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:26.059 17:33:52 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:26.059 17:33:52 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:26.059 17:33:52 -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:26.059 17:33:52 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:26.059 17:33:52 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:26.059 17:33:52 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:26.059 17:33:52 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:26.059 17:33:52 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:26.059 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:26.059 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:17:26.059 altname enp9s0f0np0 00:17:26.059 inet 192.168.100.8/24 scope global mlx_0_0 00:17:26.059 valid_lft forever preferred_lft forever 00:17:26.059 17:33:52 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:26.059 17:33:52 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:26.060 17:33:52 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:26.060 17:33:52 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:26.060 17:33:52 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:26.060 17:33:52 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:26.060 17:33:52 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:26.060 17:33:52 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:26.060 17:33:52 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:26.060 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:26.060 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:17:26.060 altname enp9s0f1np1 00:17:26.060 inet 192.168.100.9/24 scope global mlx_0_1 00:17:26.060 valid_lft forever preferred_lft forever 00:17:26.060 17:33:52 -- nvmf/common.sh@411 -- # return 0 00:17:26.060 17:33:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:26.060 17:33:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:26.060 17:33:52 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:26.060 17:33:52 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:26.060 17:33:52 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:26.060 17:33:52 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:26.060 17:33:52 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:26.060 17:33:52 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:26.060 17:33:52 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:26.060 17:33:52 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:26.060 17:33:52 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:26.060 17:33:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:26.060 17:33:52 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:26.060 17:33:52 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:26.060 17:33:52 -- nvmf/common.sh@105 -- # continue 2 00:17:26.060 17:33:52 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:26.060 17:33:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:26.060 17:33:52 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:26.060 17:33:52 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:26.060 17:33:52 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:26.060 17:33:52 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:26.060 17:33:52 -- nvmf/common.sh@105 -- # continue 2 00:17:26.060 17:33:52 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:26.060 17:33:52 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:26.060 17:33:52 -- nvmf/common.sh@112 -- # interface=mlx_0_0 
00:17:26.060 17:33:52 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:26.060 17:33:52 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:26.060 17:33:52 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:26.060 17:33:52 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:26.060 17:33:52 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:26.060 17:33:52 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:26.060 17:33:52 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:26.060 17:33:52 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:26.060 17:33:52 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:26.060 17:33:52 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:26.060 192.168.100.9' 00:17:26.060 17:33:52 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:26.060 192.168.100.9' 00:17:26.060 17:33:52 -- nvmf/common.sh@446 -- # head -n 1 00:17:26.060 17:33:52 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:26.060 17:33:52 -- nvmf/common.sh@447 -- # tail -n +2 00:17:26.060 17:33:52 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:26.060 192.168.100.9' 00:17:26.060 17:33:52 -- nvmf/common.sh@447 -- # head -n 1 00:17:26.060 17:33:52 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:26.060 17:33:52 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:26.060 17:33:52 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:26.060 17:33:52 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:26.060 17:33:52 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:26.060 17:33:52 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:26.060 17:33:52 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:26.060 17:33:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:26.060 17:33:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:26.060 17:33:52 -- common/autotest_common.sh@10 -- # set +x 00:17:26.060 17:33:52 -- nvmf/common.sh@470 -- # nvmfpid=2044864 00:17:26.060 17:33:52 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:26.060 17:33:52 -- nvmf/common.sh@471 -- # waitforlisten 2044864 00:17:26.060 17:33:52 -- common/autotest_common.sh@817 -- # '[' -z 2044864 ']' 00:17:26.060 17:33:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.060 17:33:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:26.060 17:33:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.060 17:33:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:26.060 17:33:52 -- common/autotest_common.sh@10 -- # set +x 00:17:26.060 [2024-04-18 17:33:52.570207] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:17:26.060 [2024-04-18 17:33:52.570289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.060 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.319 [2024-04-18 17:33:52.630618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:26.319 [2024-04-18 17:33:52.749084] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
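nvmfappstart above launches the SPDK target that the rest of this stage configures: build/bin/nvmf_tgt with shared-memory id 0, the full 0xFFFF tracepoint mask, and a 0xF core mask (the four reactors reported next). A hand-run equivalent, using the same flags captured in the trace plus one simple readiness check before issuing configuration RPCs (the polling loop is an assumption about usage, not part of this job):

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # poll the default RPC socket until the target answers, then configure it
  until scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done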
00:17:26.319 [2024-04-18 17:33:52.749142] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.319 [2024-04-18 17:33:52.749158] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.319 [2024-04-18 17:33:52.749172] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.319 [2024-04-18 17:33:52.749184] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.319 [2024-04-18 17:33:52.749268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.319 [2024-04-18 17:33:52.749337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.319 [2024-04-18 17:33:52.749430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.319 [2024-04-18 17:33:52.749434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.578 17:33:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:26.578 17:33:52 -- common/autotest_common.sh@850 -- # return 0 00:17:26.578 17:33:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:26.578 17:33:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:26.578 17:33:52 -- common/autotest_common.sh@10 -- # set +x 00:17:26.578 17:33:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.578 17:33:52 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:26.579 17:33:52 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:29.867 17:33:56 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:29.867 17:33:56 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:29.867 17:33:56 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:17:29.867 17:33:56 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:30.124 17:33:56 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:30.124 17:33:56 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:17:30.124 17:33:56 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:30.124 17:33:56 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:17:30.124 17:33:56 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:17:30.382 [2024-04-18 17:33:56.774922] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:17:30.382 [2024-04-18 17:33:56.796688] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x90cf10/0xa3ae00) succeed. 00:17:30.382 [2024-04-18 17:33:56.807571] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x90e500/0x91ac80) succeed. 
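Before the subsystem is built, perf.sh primes the freshly started nvmf_tgt over its RPC socket; a condensed sketch of the calls traced above, with script paths shortened relative to the SPDK tree and values taken from the log:

  # attach the local NVMe controller via the generated bdev config
  scripts/gen_nvme.sh | scripts/rpc.py load_subsystem_config
  # read back the traddr of the Nvme0 bdev that was just created (-> 0000:88:00.0)
  scripts/rpc.py framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr'
  # add a 64 MB malloc bdev with 512-byte blocks alongside it (-> Malloc0)
  scripts/rpc.py bdev_malloc_create 64 512
  # create the RDMA transport; -c 0 asks for no in-capsule data, which the target raises to the 256 B minimum per the warning above
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0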
00:17:30.639 17:33:56 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:30.898 17:33:57 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:30.898 17:33:57 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:30.898 17:33:57 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:30.898 17:33:57 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:31.202 17:33:57 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:31.481 [2024-04-18 17:33:57.888757] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:31.481 17:33:57 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:31.739 17:33:58 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:17:31.739 17:33:58 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:17:31.739 17:33:58 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:31.739 17:33:58 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:17:33.115 Initializing NVMe Controllers 00:17:33.115 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:17:33.115 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:17:33.115 Initialization complete. Launching workers. 00:17:33.116 ======================================================== 00:17:33.116 Latency(us) 00:17:33.116 Device Information : IOPS MiB/s Average min max 00:17:33.116 PCIE (0000:88:00.0) NSID 1 from core 0: 85371.43 333.48 374.37 42.68 5546.52 00:17:33.116 ======================================================== 00:17:33.116 Total : 85371.43 333.48 374.37 42.68 5546.52 00:17:33.116 00:17:33.116 17:33:59 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:17:33.116 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.404 Initializing NVMe Controllers 00:17:36.404 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:36.404 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:36.404 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:36.404 Initialization complete. Launching workers. 
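The subsystem plumbing and the local baseline pass traced above boil down to a handful of commands; sketched here with the same NQN, address, and device values that appear in the log, and with paths shortened to the SPDK tree:

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  # baseline the raw PCIe device before measuring over the fabric
  build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'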
00:17:36.404 ======================================================== 00:17:36.404 Latency(us) 00:17:36.404 Device Information : IOPS MiB/s Average min max 00:17:36.404 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5717.98 22.34 174.63 64.45 4116.61 00:17:36.404 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4648.99 18.16 213.97 83.38 4139.22 00:17:36.404 ======================================================== 00:17:36.404 Total : 10366.97 40.50 192.27 64.45 4139.22 00:17:36.404 00:17:36.404 17:34:02 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:17:36.404 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.689 Initializing NVMe Controllers 00:17:39.689 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:39.690 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:39.690 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:39.690 Initialization complete. Launching workers. 00:17:39.690 ======================================================== 00:17:39.690 Latency(us) 00:17:39.690 Device Information : IOPS MiB/s Average min max 00:17:39.690 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14789.85 57.77 2163.32 593.74 6739.99 00:17:39.690 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4001.89 15.63 7995.19 5642.06 15134.13 00:17:39.690 ======================================================== 00:17:39.690 Total : 18791.74 73.41 3405.28 593.74 15134.13 00:17:39.690 00:17:39.690 17:34:06 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:17:39.690 17:34:06 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:17:39.690 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.963 Initializing NVMe Controllers 00:17:44.963 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:44.963 Controller IO queue size 128, less than required. 00:17:44.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:44.963 Controller IO queue size 128, less than required. 00:17:44.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:44.963 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:44.963 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:44.963 Initialization complete. Launching workers. 
00:17:44.963 ======================================================== 00:17:44.963 Latency(us) 00:17:44.963 Device Information : IOPS MiB/s Average min max 00:17:44.963 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3007.68 751.92 42841.38 14099.62 101306.23 00:17:44.963 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3223.15 805.79 39157.62 15208.91 63650.89 00:17:44.963 ======================================================== 00:17:44.963 Total : 6230.83 1557.71 40935.80 14099.62 101306.23 00:17:44.963 00:17:44.963 17:34:10 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:17:44.963 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.963 No valid NVMe controllers or AIO or URING devices found 00:17:44.963 Initializing NVMe Controllers 00:17:44.963 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:44.963 Controller IO queue size 128, less than required. 00:17:44.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:44.963 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:44.963 Controller IO queue size 128, less than required. 00:17:44.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:44.963 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:17:44.963 WARNING: Some requested NVMe devices were skipped 00:17:44.963 17:34:10 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:17:44.963 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.152 Initializing NVMe Controllers 00:17:49.152 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:49.152 Controller IO queue size 128, less than required. 00:17:49.152 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.152 Controller IO queue size 128, less than required. 00:17:49.152 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.152 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:49.152 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:49.152 Initialization complete. Launching workers. 
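Two quick consistency checks, worked out by hand from the values printed above (nothing here is new data, only arithmetic on the reported numbers):

  # the MiB/s column is just IOPS x 4 KiB, e.g. for the q=1 run on NSID 1:
  #   5717.98 IOPS * 4096 B = 23.4 MB/s = 22.34 MiB/s, matching the table
  # the -o 36964 run drops both namespaces because the IO size is not sector-aligned:
  echo $(( 36964 % 512 ))   # -> 100, not a multiple of the 512 B sector size, hence
                            # "No valid NVMe controllers or AIO or URING devices found"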
00:17:49.152 00:17:49.152 ==================== 00:17:49.152 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:49.152 RDMA transport: 00:17:49.152 dev name: mlx5_0 00:17:49.152 polls: 326466 00:17:49.152 idle_polls: 323884 00:17:49.152 completions: 34726 00:17:49.152 queued_requests: 1 00:17:49.152 total_send_wrs: 17363 00:17:49.152 send_doorbell_updates: 2354 00:17:49.152 total_recv_wrs: 17490 00:17:49.152 recv_doorbell_updates: 2358 00:17:49.152 --------------------------------- 00:17:49.152 00:17:49.152 ==================== 00:17:49.152 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:49.152 RDMA transport: 00:17:49.152 dev name: mlx5_0 00:17:49.152 polls: 328447 00:17:49.152 idle_polls: 328192 00:17:49.152 completions: 17714 00:17:49.152 queued_requests: 1 00:17:49.152 total_send_wrs: 8857 00:17:49.152 send_doorbell_updates: 246 00:17:49.152 total_recv_wrs: 8984 00:17:49.152 recv_doorbell_updates: 247 00:17:49.152 --------------------------------- 00:17:49.152 ======================================================== 00:17:49.152 Latency(us) 00:17:49.152 Device Information : IOPS MiB/s Average min max 00:17:49.152 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4340.30 1085.08 29493.04 14850.37 74417.80 00:17:49.152 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2213.90 553.48 57508.53 31426.72 97323.03 00:17:49.152 ======================================================== 00:17:49.152 Total : 6554.21 1638.55 38956.20 14850.37 97323.03 00:17:49.152 00:17:49.152 17:34:15 -- host/perf.sh@66 -- # sync 00:17:49.152 17:34:15 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:49.152 17:34:15 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:49.152 17:34:15 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:49.152 17:34:15 -- host/perf.sh@114 -- # nvmftestfini 00:17:49.152 17:34:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:49.152 17:34:15 -- nvmf/common.sh@117 -- # sync 00:17:49.152 17:34:15 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:49.152 17:34:15 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:49.152 17:34:15 -- nvmf/common.sh@120 -- # set +e 00:17:49.152 17:34:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:49.152 17:34:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:49.152 rmmod nvme_rdma 00:17:49.152 rmmod nvme_fabrics 00:17:49.152 17:34:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:49.152 17:34:15 -- nvmf/common.sh@124 -- # set -e 00:17:49.152 17:34:15 -- nvmf/common.sh@125 -- # return 0 00:17:49.152 17:34:15 -- nvmf/common.sh@478 -- # '[' -n 2044864 ']' 00:17:49.152 17:34:15 -- nvmf/common.sh@479 -- # killprocess 2044864 00:17:49.152 17:34:15 -- common/autotest_common.sh@936 -- # '[' -z 2044864 ']' 00:17:49.152 17:34:15 -- common/autotest_common.sh@940 -- # kill -0 2044864 00:17:49.152 17:34:15 -- common/autotest_common.sh@941 -- # uname 00:17:49.152 17:34:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.152 17:34:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2044864 00:17:49.152 17:34:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:49.152 17:34:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:49.152 17:34:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2044864' 00:17:49.152 
killing process with pid 2044864 00:17:49.152 17:34:15 -- common/autotest_common.sh@955 -- # kill 2044864 00:17:49.152 17:34:15 -- common/autotest_common.sh@960 -- # wait 2044864 00:17:51.057 17:34:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:51.057 17:34:17 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:51.057 00:17:51.057 real 0m26.793s 00:17:51.057 user 1m39.448s 00:17:51.057 sys 0m2.471s 00:17:51.057 17:34:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:51.057 17:34:17 -- common/autotest_common.sh@10 -- # set +x 00:17:51.057 ************************************ 00:17:51.057 END TEST nvmf_perf 00:17:51.057 ************************************ 00:17:51.057 17:34:17 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:17:51.057 17:34:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:51.057 17:34:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:51.057 17:34:17 -- common/autotest_common.sh@10 -- # set +x 00:17:51.057 ************************************ 00:17:51.057 START TEST nvmf_fio_host 00:17:51.057 ************************************ 00:17:51.057 17:34:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:17:51.057 * Looking for test storage... 00:17:51.057 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:51.057 17:34:17 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:51.057 17:34:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.057 17:34:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.057 17:34:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.057 17:34:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.057 17:34:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.057 17:34:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.057 17:34:17 -- paths/export.sh@5 -- # export PATH 00:17:51.058 17:34:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.058 17:34:17 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.058 17:34:17 -- nvmf/common.sh@7 -- # uname -s 00:17:51.058 17:34:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.058 17:34:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.058 17:34:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.058 17:34:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.058 17:34:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.058 17:34:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.058 17:34:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.058 17:34:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.058 17:34:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.058 17:34:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.058 17:34:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:51.058 17:34:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:51.058 17:34:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.058 17:34:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.058 17:34:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.058 17:34:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.058 17:34:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:51.058 17:34:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.058 17:34:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.058 17:34:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.058 17:34:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.058 17:34:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.058 17:34:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.058 17:34:17 -- paths/export.sh@5 -- # export PATH 00:17:51.058 17:34:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.058 17:34:17 -- nvmf/common.sh@47 -- # : 0 00:17:51.058 17:34:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.058 17:34:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.058 17:34:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.058 17:34:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.058 17:34:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.058 17:34:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.058 17:34:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:51.058 17:34:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.058 17:34:17 -- host/fio.sh@12 -- # nvmftestinit 00:17:51.058 17:34:17 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:51.058 17:34:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.058 17:34:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:51.058 17:34:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:51.058 17:34:17 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:17:51.058 17:34:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.058 17:34:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.058 17:34:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.058 17:34:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:51.058 17:34:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:51.058 17:34:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:51.058 17:34:17 -- common/autotest_common.sh@10 -- # set +x 00:17:52.967 17:34:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:52.967 17:34:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:52.967 17:34:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:52.967 17:34:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:52.967 17:34:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:52.967 17:34:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:52.967 17:34:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:52.967 17:34:19 -- nvmf/common.sh@295 -- # net_devs=() 00:17:52.967 17:34:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:52.967 17:34:19 -- nvmf/common.sh@296 -- # e810=() 00:17:52.967 17:34:19 -- nvmf/common.sh@296 -- # local -ga e810 00:17:52.967 17:34:19 -- nvmf/common.sh@297 -- # x722=() 00:17:52.967 17:34:19 -- nvmf/common.sh@297 -- # local -ga x722 00:17:52.967 17:34:19 -- nvmf/common.sh@298 -- # mlx=() 00:17:52.967 17:34:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:52.967 17:34:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:52.967 17:34:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:52.967 17:34:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:52.967 17:34:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:52.967 17:34:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:52.967 17:34:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:52.967 17:34:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:52.967 17:34:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:52.967 17:34:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:52.967 17:34:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:52.967 17:34:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:52.967 17:34:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:52.967 17:34:19 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:52.967 17:34:19 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:52.967 17:34:19 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:52.967 17:34:19 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:17:52.967 17:34:19 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:17:52.967 17:34:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:52.967 17:34:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:52.967 17:34:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:17:52.967 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:17:52.967 17:34:19 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:52.967 17:34:19 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:52.967 17:34:19 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:17:52.967 17:34:19 -- nvmf/common.sh@362 
-- # NVME_CONNECT='nvme connect -i 15' 00:17:52.967 17:34:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:52.967 17:34:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:17:52.967 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:17:52.967 17:34:19 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:17:52.967 17:34:19 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:17:52.967 17:34:19 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:17:52.967 17:34:19 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:52.967 17:34:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:52.967 17:34:19 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:17:52.967 17:34:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:52.967 17:34:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.967 17:34:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:52.967 17:34:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.967 17:34:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:17:52.967 Found net devices under 0000:09:00.0: mlx_0_0 00:17:52.967 17:34:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.967 17:34:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:52.967 17:34:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.967 17:34:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:52.967 17:34:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.967 17:34:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:17:52.967 Found net devices under 0000:09:00.1: mlx_0_1 00:17:52.967 17:34:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.967 17:34:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:52.967 17:34:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:52.967 17:34:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:52.967 17:34:19 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:17:52.967 17:34:19 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:17:52.967 17:34:19 -- nvmf/common.sh@409 -- # rdma_device_init 00:17:52.967 17:34:19 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:17:52.967 17:34:19 -- nvmf/common.sh@58 -- # uname 00:17:52.967 17:34:19 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:52.967 17:34:19 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:52.967 17:34:19 -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:52.967 17:34:19 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:52.967 17:34:19 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:52.967 17:34:19 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:52.967 17:34:19 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:52.967 17:34:19 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:52.967 17:34:19 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:17:52.967 17:34:19 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:52.967 17:34:19 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:52.967 17:34:19 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:52.967 17:34:19 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:52.967 17:34:19 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:52.968 17:34:19 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:52.968 17:34:19 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:52.968 17:34:19 -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:17:52.968 17:34:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:52.968 17:34:19 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:52.968 17:34:19 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:52.968 17:34:19 -- nvmf/common.sh@105 -- # continue 2 00:17:52.968 17:34:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:52.968 17:34:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:52.968 17:34:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:52.968 17:34:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:52.968 17:34:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:52.968 17:34:19 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:52.968 17:34:19 -- nvmf/common.sh@105 -- # continue 2 00:17:52.968 17:34:19 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:52.968 17:34:19 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:17:52.968 17:34:19 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:52.968 17:34:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:52.968 17:34:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:52.968 17:34:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:52.968 17:34:19 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:52.968 17:34:19 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:52.968 17:34:19 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:17:52.968 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:52.968 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:17:52.968 altname enp9s0f0np0 00:17:52.968 inet 192.168.100.8/24 scope global mlx_0_0 00:17:52.968 valid_lft forever preferred_lft forever 00:17:52.968 17:34:19 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:52.968 17:34:19 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:17:52.968 17:34:19 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:52.968 17:34:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:52.968 17:34:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:52.968 17:34:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:52.968 17:34:19 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:52.968 17:34:19 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:52.968 17:34:19 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:17:52.968 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:52.968 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:17:52.968 altname enp9s0f1np1 00:17:52.968 inet 192.168.100.9/24 scope global mlx_0_1 00:17:52.968 valid_lft forever preferred_lft forever 00:17:52.968 17:34:19 -- nvmf/common.sh@411 -- # return 0 00:17:52.968 17:34:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:52.968 17:34:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:52.968 17:34:19 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:17:52.968 17:34:19 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:17:52.968 17:34:19 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:52.968 17:34:19 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:52.968 17:34:19 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:52.968 17:34:19 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:52.968 17:34:19 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:52.968 17:34:19 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:52.968 17:34:19 -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:53.226 17:34:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.226 17:34:19 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:53.226 17:34:19 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:53.226 17:34:19 -- nvmf/common.sh@105 -- # continue 2 00:17:53.226 17:34:19 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:53.226 17:34:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.226 17:34:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:53.226 17:34:19 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:53.226 17:34:19 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:53.226 17:34:19 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:53.226 17:34:19 -- nvmf/common.sh@105 -- # continue 2 00:17:53.226 17:34:19 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:53.226 17:34:19 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:17:53.226 17:34:19 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:53.226 17:34:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:53.226 17:34:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:53.226 17:34:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:53.226 17:34:19 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:53.226 17:34:19 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:17:53.226 17:34:19 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:53.226 17:34:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:53.226 17:34:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:53.226 17:34:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:53.226 17:34:19 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:17:53.226 192.168.100.9' 00:17:53.226 17:34:19 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:53.226 192.168.100.9' 00:17:53.226 17:34:19 -- nvmf/common.sh@446 -- # head -n 1 00:17:53.226 17:34:19 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:53.226 17:34:19 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:17:53.226 192.168.100.9' 00:17:53.226 17:34:19 -- nvmf/common.sh@447 -- # tail -n +2 00:17:53.226 17:34:19 -- nvmf/common.sh@447 -- # head -n 1 00:17:53.226 17:34:19 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:53.226 17:34:19 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:17:53.226 17:34:19 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:53.226 17:34:19 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:17:53.226 17:34:19 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:17:53.226 17:34:19 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:17:53.226 17:34:19 -- host/fio.sh@14 -- # [[ y != y ]] 00:17:53.226 17:34:19 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:17:53.226 17:34:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:53.226 17:34:19 -- common/autotest_common.sh@10 -- # set +x 00:17:53.226 17:34:19 -- host/fio.sh@22 -- # nvmfpid=2049486 00:17:53.226 17:34:19 -- host/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:53.226 17:34:19 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:53.226 17:34:19 -- host/fio.sh@26 -- # waitforlisten 2049486 00:17:53.226 17:34:19 -- common/autotest_common.sh@817 -- # '[' -z 2049486 ']' 00:17:53.226 17:34:19 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.226 17:34:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:53.226 17:34:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.226 17:34:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:53.227 17:34:19 -- common/autotest_common.sh@10 -- # set +x 00:17:53.227 [2024-04-18 17:34:19.592185] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:17:53.227 [2024-04-18 17:34:19.592289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.227 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.227 [2024-04-18 17:34:19.655694] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.227 [2024-04-18 17:34:19.763283] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.227 [2024-04-18 17:34:19.763340] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.227 [2024-04-18 17:34:19.763379] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.227 [2024-04-18 17:34:19.763398] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.227 [2024-04-18 17:34:19.763410] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.227 [2024-04-18 17:34:19.763470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.227 [2024-04-18 17:34:19.763499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.227 [2024-04-18 17:34:19.763566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.227 [2024-04-18 17:34:19.763569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.485 17:34:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:53.485 17:34:19 -- common/autotest_common.sh@850 -- # return 0 00:17:53.485 17:34:19 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:53.485 17:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:53.485 17:34:19 -- common/autotest_common.sh@10 -- # set +x 00:17:53.485 [2024-04-18 17:34:19.911899] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd3cb60/0xd41050) succeed. 00:17:53.485 [2024-04-18 17:34:19.928181] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd3e150/0xd826e0) succeed. 
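As in the earlier perf run, fio.sh only gets this far because rdma_device_init / load_ib_rdma_modules loaded the RDMA stack first; a minimal sketch of the modules the trace above probes for, assuming a distro kernel that ships them:

  # RDMA core stack used by the mlx5 ports
  modprobe ib_cm
  modprobe ib_core
  modprobe ib_umad
  modprobe ib_uverbs
  modprobe iw_cm
  modprobe rdma_cm
  modprobe rdma_ucm
  # NVMe over Fabrics RDMA initiator, loaded once the target IPs are known
  modprobe nvme-rdma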
00:17:53.743 17:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:53.743 17:34:20 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:17:53.743 17:34:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:53.743 17:34:20 -- common/autotest_common.sh@10 -- # set +x 00:17:53.743 17:34:20 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:53.743 17:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:53.743 17:34:20 -- common/autotest_common.sh@10 -- # set +x 00:17:53.743 Malloc1 00:17:53.743 17:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:53.743 17:34:20 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:53.743 17:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:53.743 17:34:20 -- common/autotest_common.sh@10 -- # set +x 00:17:53.743 17:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:53.743 17:34:20 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:53.743 17:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:53.743 17:34:20 -- common/autotest_common.sh@10 -- # set +x 00:17:53.743 17:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:53.743 17:34:20 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:53.743 17:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:53.743 17:34:20 -- common/autotest_common.sh@10 -- # set +x 00:17:53.743 [2024-04-18 17:34:20.152678] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:53.743 17:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:53.743 17:34:20 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:53.743 17:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:53.743 17:34:20 -- common/autotest_common.sh@10 -- # set +x 00:17:53.743 17:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:53.743 17:34:20 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:17:53.743 17:34:20 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:17:53.743 17:34:20 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:17:53.743 17:34:20 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:53.743 17:34:20 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:53.743 17:34:20 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:53.743 17:34:20 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:53.743 17:34:20 -- common/autotest_common.sh@1327 -- # shift 00:17:53.743 17:34:20 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:53.743 17:34:20 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:53.743 17:34:20 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:53.743 17:34:20 -- 
common/autotest_common.sh@1331 -- # grep libasan 00:17:53.743 17:34:20 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:53.743 17:34:20 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:53.743 17:34:20 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:53.743 17:34:20 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:53.743 17:34:20 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:53.743 17:34:20 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:17:53.743 17:34:20 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:53.743 17:34:20 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:53.743 17:34:20 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:53.743 17:34:20 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:53.743 17:34:20 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:17:54.003 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:54.003 fio-3.35 00:17:54.003 Starting 1 thread 00:17:54.003 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.531 00:17:56.531 test: (groupid=0, jobs=1): err= 0: pid=2049705: Thu Apr 18 17:34:22 2024 00:17:56.531 read: IOPS=13.9k, BW=54.3MiB/s (57.0MB/s)(109MiB/2005msec) 00:17:56.531 slat (nsec): min=1693, max=31652, avg=1935.24, stdev=847.03 00:17:56.531 clat (usec): min=1709, max=8186, avg=4573.25, stdev=140.01 00:17:56.531 lat (usec): min=1723, max=8187, avg=4575.18, stdev=139.98 00:17:56.531 clat percentiles (usec): 00:17:56.531 | 1.00th=[ 4424], 5.00th=[ 4490], 10.00th=[ 4490], 20.00th=[ 4490], 00:17:56.531 | 30.00th=[ 4555], 40.00th=[ 4555], 50.00th=[ 4555], 60.00th=[ 4555], 00:17:56.531 | 70.00th=[ 4621], 80.00th=[ 4621], 90.00th=[ 4621], 95.00th=[ 4686], 00:17:56.531 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 6849], 99.95th=[ 8029], 00:17:56.531 | 99.99th=[ 8160] 00:17:56.531 bw ( KiB/s): min=54256, max=56288, per=100.00%, avg=55644.00, stdev=936.46, samples=4 00:17:56.531 iops : min=13564, max=14072, avg=13911.00, stdev=234.11, samples=4 00:17:56.531 write: IOPS=13.9k, BW=54.4MiB/s (57.0MB/s)(109MiB/2005msec); 0 zone resets 00:17:56.531 slat (nsec): min=1765, max=33527, avg=2225.79, stdev=1395.91 00:17:56.531 clat (usec): min=2742, max=8210, avg=4570.97, stdev=136.47 00:17:56.531 lat (usec): min=2749, max=8211, avg=4573.19, stdev=136.44 00:17:56.531 clat percentiles (usec): 00:17:56.531 | 1.00th=[ 4424], 5.00th=[ 4490], 10.00th=[ 4490], 20.00th=[ 4490], 00:17:56.531 | 30.00th=[ 4555], 40.00th=[ 4555], 50.00th=[ 4555], 60.00th=[ 4555], 00:17:56.531 | 70.00th=[ 4621], 80.00th=[ 4621], 90.00th=[ 4621], 95.00th=[ 4686], 00:17:56.532 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 6783], 99.95th=[ 7504], 00:17:56.532 | 99.99th=[ 8225] 00:17:56.532 bw ( KiB/s): min=54568, max=56072, per=99.99%, avg=55660.00, stdev=730.66, samples=4 00:17:56.532 iops : min=13642, max=14018, avg=13915.00, stdev=182.67, samples=4 00:17:56.532 lat (msec) : 2=0.01%, 4=0.28%, 10=99.71% 00:17:56.532 cpu : usr=99.45%, sys=0.00%, ctx=16, majf=0, minf=13 00:17:56.532 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:56.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:17:56.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:56.532 issued rwts: total=27878,27901,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.532 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:56.532 00:17:56.532 Run status group 0 (all jobs): 00:17:56.532 READ: bw=54.3MiB/s (57.0MB/s), 54.3MiB/s-54.3MiB/s (57.0MB/s-57.0MB/s), io=109MiB (114MB), run=2005-2005msec 00:17:56.532 WRITE: bw=54.4MiB/s (57.0MB/s), 54.4MiB/s-54.4MiB/s (57.0MB/s-57.0MB/s), io=109MiB (114MB), run=2005-2005msec 00:17:56.532 17:34:22 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:17:56.532 17:34:22 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:17:56.532 17:34:22 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:56.532 17:34:22 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:56.532 17:34:22 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:56.532 17:34:22 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:56.532 17:34:22 -- common/autotest_common.sh@1327 -- # shift 00:17:56.532 17:34:22 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:56.532 17:34:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:56.532 17:34:22 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:56.532 17:34:22 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:56.532 17:34:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:56.532 17:34:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:56.532 17:34:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:56.532 17:34:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:56.532 17:34:22 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:56.532 17:34:22 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:17:56.532 17:34:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:56.532 17:34:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:56.532 17:34:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:56.532 17:34:22 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:56.532 17:34:22 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:17:56.532 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:56.532 fio-3.35 00:17:56.532 Starting 1 thread 00:17:56.532 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.110 00:17:59.110 test: (groupid=0, jobs=1): err= 0: pid=2050158: Thu Apr 18 17:34:25 2024 00:17:59.110 read: IOPS=10.4k, BW=162MiB/s (170MB/s)(322MiB/1989msec) 00:17:59.110 slat (nsec): min=2805, max=31681, avg=3224.49, stdev=1316.43 00:17:59.110 clat (usec): 
min=355, max=10068, avg=2295.02, stdev=1457.41 00:17:59.110 lat (usec): min=359, max=10070, avg=2298.25, stdev=1457.61 00:17:59.110 clat percentiles (usec): 00:17:59.110 | 1.00th=[ 766], 5.00th=[ 1074], 10.00th=[ 1254], 20.00th=[ 1450], 00:17:59.110 | 30.00th=[ 1614], 40.00th=[ 1745], 50.00th=[ 1876], 60.00th=[ 2057], 00:17:59.110 | 70.00th=[ 2278], 80.00th=[ 2606], 90.00th=[ 3458], 95.00th=[ 6521], 00:17:59.110 | 99.00th=[ 8291], 99.50th=[ 8979], 99.90th=[ 9634], 99.95th=[ 9765], 00:17:59.110 | 99.99th=[10028] 00:17:59.110 bw ( KiB/s): min=75584, max=88576, per=49.46%, avg=82072.00, stdev=5305.27, samples=4 00:17:59.110 iops : min= 4724, max= 5536, avg=5129.50, stdev=331.58, samples=4 00:17:59.110 write: IOPS=5727, BW=89.5MiB/s (93.8MB/s)(167MiB/1866msec); 0 zone resets 00:17:59.110 slat (nsec): min=30119, max=69723, avg=33002.60, stdev=4517.24 00:17:59.110 clat (usec): min=5750, max=26431, avg=17953.15, stdev=2389.05 00:17:59.110 lat (usec): min=5780, max=26464, avg=17986.15, stdev=2388.99 00:17:59.110 clat percentiles (usec): 00:17:59.110 | 1.00th=[ 9765], 5.00th=[14746], 10.00th=[15401], 20.00th=[16188], 00:17:59.110 | 30.00th=[16712], 40.00th=[17171], 50.00th=[17957], 60.00th=[18482], 00:17:59.110 | 70.00th=[19006], 80.00th=[19792], 90.00th=[20841], 95.00th=[22152], 00:17:59.110 | 99.00th=[23725], 99.50th=[24249], 99.90th=[25035], 99.95th=[25297], 00:17:59.110 | 99.99th=[26346] 00:17:59.110 bw ( KiB/s): min=77664, max=92160, per=92.25%, avg=84536.00, stdev=6093.31, samples=4 00:17:59.110 iops : min= 4854, max= 5760, avg=5283.50, stdev=380.83, samples=4 00:17:59.110 lat (usec) : 500=0.04%, 750=0.49%, 1000=1.86% 00:17:59.110 lat (msec) : 2=35.23%, 4=22.61%, 10=5.98%, 20=27.88%, 50=5.90% 00:17:59.110 cpu : usr=97.26%, sys=1.35%, ctx=143, majf=0, minf=27 00:17:59.110 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:17:59.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:59.110 issued rwts: total=20629,10687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:59.110 00:17:59.110 Run status group 0 (all jobs): 00:17:59.110 READ: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=322MiB (338MB), run=1989-1989msec 00:17:59.110 WRITE: bw=89.5MiB/s (93.8MB/s), 89.5MiB/s-89.5MiB/s (93.8MB/s-93.8MB/s), io=167MiB (175MB), run=1866-1866msec 00:17:59.110 17:34:25 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.110 17:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:59.110 17:34:25 -- common/autotest_common.sh@10 -- # set +x 00:17:59.110 17:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:59.110 17:34:25 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:17:59.110 17:34:25 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:17:59.110 17:34:25 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:17:59.110 17:34:25 -- host/fio.sh@84 -- # nvmftestfini 00:17:59.110 17:34:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:59.110 17:34:25 -- nvmf/common.sh@117 -- # sync 00:17:59.110 17:34:25 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:59.110 17:34:25 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:59.110 17:34:25 -- nvmf/common.sh@120 -- # set +e 00:17:59.110 17:34:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:59.110 17:34:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:59.110 rmmod 
nvme_rdma 00:17:59.110 rmmod nvme_fabrics 00:17:59.110 17:34:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:59.110 17:34:25 -- nvmf/common.sh@124 -- # set -e 00:17:59.110 17:34:25 -- nvmf/common.sh@125 -- # return 0 00:17:59.110 17:34:25 -- nvmf/common.sh@478 -- # '[' -n 2049486 ']' 00:17:59.110 17:34:25 -- nvmf/common.sh@479 -- # killprocess 2049486 00:17:59.110 17:34:25 -- common/autotest_common.sh@936 -- # '[' -z 2049486 ']' 00:17:59.110 17:34:25 -- common/autotest_common.sh@940 -- # kill -0 2049486 00:17:59.110 17:34:25 -- common/autotest_common.sh@941 -- # uname 00:17:59.110 17:34:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:59.110 17:34:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2049486 00:17:59.110 17:34:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:59.110 17:34:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:59.110 17:34:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2049486' 00:17:59.110 killing process with pid 2049486 00:17:59.110 17:34:25 -- common/autotest_common.sh@955 -- # kill 2049486 00:17:59.110 17:34:25 -- common/autotest_common.sh@960 -- # wait 2049486 00:17:59.368 17:34:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:59.368 17:34:25 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:17:59.368 00:17:59.368 real 0m8.296s 00:17:59.368 user 0m29.076s 00:17:59.368 sys 0m2.070s 00:17:59.368 17:34:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:59.368 17:34:25 -- common/autotest_common.sh@10 -- # set +x 00:17:59.368 ************************************ 00:17:59.368 END TEST nvmf_fio_host 00:17:59.368 ************************************ 00:17:59.368 17:34:25 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:17:59.368 17:34:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:59.368 17:34:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:59.368 17:34:25 -- common/autotest_common.sh@10 -- # set +x 00:17:59.368 ************************************ 00:17:59.368 START TEST nvmf_failover 00:17:59.368 ************************************ 00:17:59.368 17:34:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:17:59.368 * Looking for test storage... 
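For reference, the fio pass that just finished drives the RDMA subsystem through SPDK's fio plugin rather than the kernel initiator; a minimal sketch of that invocation as it appears in the trace, with the absolute workspace paths shortened to SPDK-relative ones:

  LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
  # sanity check on the reported rate: 13911 IOPS * 4096 B ~= 57.0 MB/s = 54.3 MiB/s, as printed above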
00:17:59.368 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:59.368 17:34:25 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.368 17:34:25 -- nvmf/common.sh@7 -- # uname -s 00:17:59.368 17:34:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.368 17:34:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.368 17:34:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.368 17:34:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.368 17:34:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.368 17:34:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.368 17:34:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.368 17:34:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.368 17:34:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.628 17:34:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.628 17:34:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.628 17:34:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.628 17:34:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.628 17:34:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.628 17:34:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.628 17:34:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.628 17:34:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:59.628 17:34:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.628 17:34:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.628 17:34:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.628 17:34:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.628 17:34:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.628 17:34:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.628 17:34:25 -- paths/export.sh@5 -- # export PATH 00:17:59.628 17:34:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.628 17:34:25 -- nvmf/common.sh@47 -- # : 0 00:17:59.628 17:34:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:59.628 17:34:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:59.628 17:34:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.628 17:34:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.628 17:34:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.628 17:34:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:59.628 17:34:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:59.628 17:34:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:59.629 17:34:25 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:59.629 17:34:25 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:59.629 17:34:25 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:59.629 17:34:25 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.629 17:34:25 -- host/failover.sh@18 -- # nvmftestinit 00:17:59.629 17:34:25 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:17:59.629 17:34:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.629 17:34:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:59.629 17:34:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:59.629 17:34:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:59.629 17:34:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.629 17:34:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.629 17:34:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.629 17:34:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:59.629 17:34:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:59.629 17:34:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:59.629 17:34:25 -- common/autotest_common.sh@10 -- # set +x 00:18:01.532 17:34:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:01.532 17:34:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:01.532 17:34:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:01.532 17:34:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:01.532 17:34:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:01.532 17:34:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:01.532 17:34:27 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:18:01.532 17:34:27 -- nvmf/common.sh@295 -- # net_devs=() 00:18:01.532 17:34:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:01.532 17:34:27 -- nvmf/common.sh@296 -- # e810=() 00:18:01.532 17:34:27 -- nvmf/common.sh@296 -- # local -ga e810 00:18:01.532 17:34:27 -- nvmf/common.sh@297 -- # x722=() 00:18:01.532 17:34:27 -- nvmf/common.sh@297 -- # local -ga x722 00:18:01.532 17:34:27 -- nvmf/common.sh@298 -- # mlx=() 00:18:01.533 17:34:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:01.533 17:34:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.533 17:34:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.533 17:34:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.533 17:34:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.533 17:34:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.533 17:34:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.533 17:34:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.533 17:34:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.533 17:34:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.533 17:34:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.533 17:34:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.533 17:34:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:01.533 17:34:27 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:01.533 17:34:27 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:01.533 17:34:27 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:01.533 17:34:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:01.533 17:34:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:18:01.533 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:18:01.533 17:34:27 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:01.533 17:34:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:18:01.533 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:18:01.533 17:34:27 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:01.533 17:34:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:01.533 17:34:27 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.533 17:34:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:01.533 17:34:27 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.533 17:34:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:18:01.533 Found net devices under 0000:09:00.0: mlx_0_0 00:18:01.533 17:34:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.533 17:34:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.533 17:34:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:01.533 17:34:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.533 17:34:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:18:01.533 Found net devices under 0000:09:00.1: mlx_0_1 00:18:01.533 17:34:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.533 17:34:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:01.533 17:34:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:01.533 17:34:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:01.533 17:34:27 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:01.533 17:34:27 -- nvmf/common.sh@58 -- # uname 00:18:01.533 17:34:27 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:01.533 17:34:27 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:01.533 17:34:27 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:01.533 17:34:27 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:01.533 17:34:27 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:01.533 17:34:27 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:01.533 17:34:27 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:01.533 17:34:27 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:01.533 17:34:27 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:01.533 17:34:27 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:01.533 17:34:27 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:01.533 17:34:27 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:01.533 17:34:27 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:01.533 17:34:27 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:01.533 17:34:27 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:01.533 17:34:27 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:01.533 17:34:27 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:01.533 17:34:27 -- nvmf/common.sh@105 -- # continue 2 00:18:01.533 17:34:27 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:01.533 17:34:27 -- nvmf/common.sh@105 -- # continue 2 00:18:01.533 17:34:27 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:01.533 17:34:27 -- 
nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:01.533 17:34:27 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:01.533 17:34:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:01.533 17:34:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:01.533 17:34:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:01.533 17:34:27 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:01.533 17:34:27 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:01.533 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:01.533 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:18:01.533 altname enp9s0f0np0 00:18:01.533 inet 192.168.100.8/24 scope global mlx_0_0 00:18:01.533 valid_lft forever preferred_lft forever 00:18:01.533 17:34:27 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:01.533 17:34:27 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:01.533 17:34:27 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:01.533 17:34:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:01.533 17:34:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:01.533 17:34:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:01.533 17:34:27 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:01.533 17:34:27 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:01.533 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:01.533 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:18:01.533 altname enp9s0f1np1 00:18:01.533 inet 192.168.100.9/24 scope global mlx_0_1 00:18:01.533 valid_lft forever preferred_lft forever 00:18:01.533 17:34:27 -- nvmf/common.sh@411 -- # return 0 00:18:01.533 17:34:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:01.533 17:34:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:01.533 17:34:27 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:01.533 17:34:27 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:01.533 17:34:27 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:01.533 17:34:27 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:01.533 17:34:27 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:01.533 17:34:27 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:01.533 17:34:27 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:01.533 17:34:27 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:01.533 17:34:27 -- nvmf/common.sh@105 -- # continue 2 00:18:01.533 17:34:27 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:01.533 17:34:27 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:01.533 17:34:27 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:01.533 17:34:27 -- nvmf/common.sh@105 -- # continue 2 00:18:01.533 17:34:27 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 
00:18:01.533 17:34:27 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:01.533 17:34:27 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:01.533 17:34:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:01.533 17:34:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:01.533 17:34:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:01.533 17:34:27 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:01.533 17:34:27 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:01.533 17:34:27 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:01.533 17:34:27 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:01.533 17:34:27 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:01.533 17:34:27 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:01.533 17:34:27 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:01.533 192.168.100.9' 00:18:01.533 17:34:27 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:01.533 192.168.100.9' 00:18:01.533 17:34:27 -- nvmf/common.sh@446 -- # head -n 1 00:18:01.533 17:34:27 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:01.533 17:34:27 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:01.533 192.168.100.9' 00:18:01.533 17:34:27 -- nvmf/common.sh@447 -- # tail -n +2 00:18:01.533 17:34:27 -- nvmf/common.sh@447 -- # head -n 1 00:18:01.533 17:34:27 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:01.533 17:34:27 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:01.533 17:34:27 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:01.533 17:34:27 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:01.533 17:34:27 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:01.533 17:34:27 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:01.533 17:34:27 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:01.533 17:34:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:01.533 17:34:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:01.533 17:34:27 -- common/autotest_common.sh@10 -- # set +x 00:18:01.533 17:34:27 -- nvmf/common.sh@470 -- # nvmfpid=2052099 00:18:01.533 17:34:27 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:01.533 17:34:27 -- nvmf/common.sh@471 -- # waitforlisten 2052099 00:18:01.533 17:34:27 -- common/autotest_common.sh@817 -- # '[' -z 2052099 ']' 00:18:01.533 17:34:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.533 17:34:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:01.533 17:34:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.533 17:34:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:01.533 17:34:27 -- common/autotest_common.sh@10 -- # set +x 00:18:01.533 [2024-04-18 17:34:27.924321] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:18:01.533 [2024-04-18 17:34:27.924430] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.533 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.533 [2024-04-18 17:34:27.981535] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:01.791 [2024-04-18 17:34:28.086599] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.791 [2024-04-18 17:34:28.086656] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.791 [2024-04-18 17:34:28.086671] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.791 [2024-04-18 17:34:28.086683] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.791 [2024-04-18 17:34:28.086693] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:01.791 [2024-04-18 17:34:28.086781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.791 [2024-04-18 17:34:28.086844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:01.791 [2024-04-18 17:34:28.086847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.791 17:34:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:01.791 17:34:28 -- common/autotest_common.sh@850 -- # return 0 00:18:01.791 17:34:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:01.791 17:34:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:01.791 17:34:28 -- common/autotest_common.sh@10 -- # set +x 00:18:01.791 17:34:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.791 17:34:28 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:02.049 [2024-04-18 17:34:28.453793] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2027380/0x202b870) succeed. 00:18:02.049 [2024-04-18 17:34:28.464333] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20288d0/0x206cf00) succeed. 
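
For orientation, the target-side setup that host/failover.sh drives in the trace above and below condenses to the rpc.py sequence sketched here. All commands, addresses, and sizes are taken from the trace itself; the rpc.py path is factored into a shell variable and the three listener calls are folded into a loop purely for readability, so this is a condensed sketch rather than the literal script.

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # RDMA transport, then a 64 MiB malloc bdev with 512-byte blocks exported as a namespace of cnode1
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # three RDMA listeners on the same address, so the host side can fail over between ports
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s "$port"
  done
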
00:18:02.308 17:34:28 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:02.308 Malloc0 00:18:02.569 17:34:28 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:02.827 17:34:29 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:02.827 17:34:29 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:03.084 [2024-04-18 17:34:29.569751] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:03.084 17:34:29 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:03.342 [2024-04-18 17:34:29.826482] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:03.342 17:34:29 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:18:03.600 [2024-04-18 17:34:30.111585] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:18:03.600 17:34:30 -- host/failover.sh@31 -- # bdevperf_pid=2052386 00:18:03.600 17:34:30 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:03.600 17:34:30 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:03.600 17:34:30 -- host/failover.sh@34 -- # waitforlisten 2052386 /var/tmp/bdevperf.sock 00:18:03.600 17:34:30 -- common/autotest_common.sh@817 -- # '[' -z 2052386 ']' 00:18:03.600 17:34:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.600 17:34:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:03.600 17:34:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
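
The host-side failover exercise traced below reduces to: attach two paths to the same subsystem through bdevperf's RPC socket, start the verify workload, then remove and re-add target listeners while I/O is in flight. The sketch below is condensed from the commands visible in the trace ($spdk is shorthand for the workspace path used throughout; bdevperf itself was started above with -q 128 -o 4096 -w verify -t 15 -f). Each listener removal shows up further down as the target aborting the outstanding READs on that queue pair ("ABORTED - SQ DELETION" completions), while I/O continues on the remaining path.

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  rpc=$spdk/scripts/rpc.py
  # two paths (ports 4420 and 4421) to cnode1 -> a single NVMe0n1 bdev with a failover path
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # start the 15-second verify run in the background, then rotate the active listener under it
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  sleep 3
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
  sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
  wait   # let perform_tests finish; the run ends with 0 I/O errors in the trace below
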
00:18:03.600 17:34:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:03.600 17:34:30 -- common/autotest_common.sh@10 -- # set +x 00:18:04.167 17:34:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:04.167 17:34:30 -- common/autotest_common.sh@850 -- # return 0 00:18:04.167 17:34:30 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:04.424 NVMe0n1 00:18:04.424 17:34:30 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:04.682 00:18:04.682 17:34:31 -- host/failover.sh@39 -- # run_test_pid=2052514 00:18:04.682 17:34:31 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:04.682 17:34:31 -- host/failover.sh@41 -- # sleep 1 00:18:05.616 17:34:32 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:06.184 17:34:32 -- host/failover.sh@45 -- # sleep 3 00:18:09.473 17:34:35 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:09.473 00:18:09.473 17:34:35 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:09.473 17:34:35 -- host/failover.sh@50 -- # sleep 3 00:18:12.761 17:34:38 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:12.761 [2024-04-18 17:34:39.217489] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:12.761 17:34:39 -- host/failover.sh@55 -- # sleep 1 00:18:14.140 17:34:40 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:18:14.140 17:34:40 -- host/failover.sh@59 -- # wait 2052514 00:18:20.717 0 00:18:20.717 17:34:46 -- host/failover.sh@61 -- # killprocess 2052386 00:18:20.717 17:34:46 -- common/autotest_common.sh@936 -- # '[' -z 2052386 ']' 00:18:20.717 17:34:46 -- common/autotest_common.sh@940 -- # kill -0 2052386 00:18:20.717 17:34:46 -- common/autotest_common.sh@941 -- # uname 00:18:20.717 17:34:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:20.717 17:34:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2052386 00:18:20.717 17:34:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:20.717 17:34:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:20.717 17:34:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2052386' 00:18:20.717 killing process with pid 2052386 00:18:20.717 17:34:46 -- common/autotest_common.sh@955 -- # kill 2052386 00:18:20.717 17:34:46 -- common/autotest_common.sh@960 -- # wait 2052386 00:18:20.717 17:34:46 -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:20.717 [2024-04-18 17:34:30.171319] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:18:20.717 [2024-04-18 17:34:30.171439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052386 ] 00:18:20.717 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.717 [2024-04-18 17:34:30.233500] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.717 [2024-04-18 17:34:30.340068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.717 Running I/O for 15 seconds... 00:18:20.717 [2024-04-18 17:34:33.407897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x187200 00:18:20.717 [2024-04-18 17:34:33.407965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.717 [2024-04-18 17:34:33.408010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x187200 00:18:20.717 [2024-04-18 17:34:33.408026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.717 [2024-04-18 17:34:33.408044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x187200 00:18:20.717 [2024-04-18 17:34:33.408058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.717 [2024-04-18 17:34:33.408074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x187200 00:18:20.717 [2024-04-18 17:34:33.408088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.717 [2024-04-18 17:34:33.408104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x187200 00:18:20.717 [2024-04-18 17:34:33.408118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.717 [2024-04-18 17:34:33.408134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x187200 00:18:20.717 [2024-04-18 17:34:33.408148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.717 [2024-04-18 17:34:33.408164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x187200 00:18:20.717 [2024-04-18 17:34:33.408178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.717 [2024-04-18 17:34:33.408194] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x187200 00:18:20.717 [2024-04-18 17:34:33.408208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.717 [2024-04-18 17:34:33.408223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x187200 00:18:20.717 [2024-04-18 17:34:33.408238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.717 [2024-04-18 17:34:33.408253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x187200 00:18:20.717 [2024-04-18 17:34:33.408268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408513] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3312 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.408972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.408986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.409001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.409018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.409035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.409049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.409064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 
len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.409078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.409093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.409106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.409121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.409134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.409149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.409163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.409177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.409191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.409206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.409219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.409235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.409248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.409263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.409276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.409291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.409304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.409319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 
17:34:33.409336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.718 [2024-04-18 17:34:33.409352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x187200 00:18:20.718 [2024-04-18 17:34:33.409366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.409984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.409998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 
sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 17:34:33.410449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x187200 00:18:20.719 [2024-04-18 17:34:33.410463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.719 [2024-04-18 
17:34:33.410479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.410979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.410994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:3896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3968 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x20000751e000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.720 [2024-04-18 17:34:33.411489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x187200 00:18:20.720 [2024-04-18 17:34:33.411504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:33.411520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:33.411534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:33.411550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:33.411564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:33.411579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x187200 
00:18:20.721 [2024-04-18 17:34:33.411593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:33.411609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:33.411624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:33.411640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:33.411654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:33.411669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:33.411683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:33.411714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:33.411728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:33.411742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:33.411756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:33.411771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:33.411789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:33.411804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.721 [2024-04-18 17:34:33.411818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:33.411833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.721 [2024-04-18 17:34:33.411847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:33.411861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.721 [2024-04-18 17:34:33.411875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 
00:18:20.721 [2024-04-18 17:34:33.411889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:20.721 [2024-04-18 17:34:33.411903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 
00:18:20.721 [2024-04-18 17:34:33.413321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:18:20.721 [2024-04-18 17:34:33.413342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:18:20.721 [2024-04-18 17:34:33.413355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:8 PRP1 0x0 PRP2 0x0 
00:18:20.721 [2024-04-18 17:34:33.413369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:20.721 [2024-04-18 17:34:33.413438] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a00 was disconnected and freed. reset controller. 
00:18:20.721 [2024-04-18 17:34:33.413460] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 
00:18:20.721 [2024-04-18 17:34:33.413475] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:20.721 [2024-04-18 17:34:33.416794] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:18:20.721 [2024-04-18 17:34:33.433190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 
00:18:20.721 [2024-04-18 17:34:33.478482] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
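00:18:20.721 The burst of "ABORTED - SQ DELETION" completions above is the bdev_nvme failover path: once the RDMA qpair to 192.168.100.8:4420 is torn down, every queued READ/WRITE on qpair 1 is completed manually, the qpair is freed, and the driver retries the same subsystem (nqn.2016-06.io.spdk:cnode1) over the alternate trid 192.168.100.8:4421 before reporting the controller reset as successful. A minimal sketch of the two-path attach that produces this behaviour, assuming SPDK's stock rpc.py with its multipath mode left at the default; the script path and the bdev name "NVMe0" are illustrative assumptions, only the addresses, ports and NQN come from the log itself:
00:18:20.721   # attach the same subsystem over two trids so bdev_nvme has a secondary path to fail over to
00:18:20.721   ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
00:18:20.721   ./scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t rdma -f ipv4 -a 192.168.100.8 -s 4421 -n nqn.2016-06.io.spdk:cnode1
00:18:20.721   # removing the first listener on the target side then forces a failover like the one logged above
00:18:20.721   ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420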
00:18:20.721 [2024-04-18 17:34:36.990583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.990645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.990672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.990689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.990706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.990721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.990737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.990760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.990777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.990791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.990806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.990820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.990836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.990851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.990866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.990880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.990895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.990909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.990925] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.990938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.990954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.990968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.990983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.990996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.991012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.991026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.991041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x187200 00:18:20.721 [2024-04-18 17:34:36.991054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.991069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.721 [2024-04-18 17:34:36.991083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.991102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.721 [2024-04-18 17:34:36.991116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.991131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.721 [2024-04-18 17:34:36.991145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.991160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.721 [2024-04-18 17:34:36.991174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.721 [2024-04-18 17:34:36.991189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.722 [2024-04-18 
17:34:36.991202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.722 [2024-04-18 17:34:36.991217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.722 [2024-04-18 17:34:36.991231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.722 [2024-04-18 17:34:36.991246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.722 [2024-04-18 17:34:36.991259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.722 [2024-04-18 17:34:36.991274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.722 [2024-04-18 17:34:36.991287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.722 [2024-04-18 17:34:36.991302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42000 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:42056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:109 nsid:1 lba:42080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.723 [2024-04-18 17:34:36.991923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x187200 00:18:20.723 [2024-04-18 17:34:36.991952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.723 [2024-04-18 17:34:36.991968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x187200 00:18:20.723 [2024-04-18 17:34:36.991982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.991998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x187200 00:18:20.724 [2024-04-18 17:34:36.992012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x187200 00:18:20.724 [2024-04-18 17:34:36.992041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x187200 00:18:20.724 [2024-04-18 17:34:36.992071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x187200 00:18:20.724 [2024-04-18 17:34:36.992100] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x187200 00:18:20.724 [2024-04-18 17:34:36.992129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x187200 00:18:20.724 [2024-04-18 17:34:36.992158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:41632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x187200 00:18:20.724 [2024-04-18 17:34:36.992307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x187200 00:18:20.724 [2024-04-18 17:34:36.992336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x187200 00:18:20.724 [2024-04-18 17:34:36.992364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 
[2024-04-18 17:34:36.992379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x187200 00:18:20.724 [2024-04-18 17:34:36.992417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:42168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.992972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.992987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.993001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.993016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.993029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.993044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.993058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.993073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.724 [2024-04-18 17:34:36.993087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.724 [2024-04-18 17:34:36.993102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.725 [2024-04-18 17:34:36.993115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.725 [2024-04-18 17:34:36.993144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:41680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:41768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.725 [2024-04-18 17:34:36.993685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.725 [2024-04-18 17:34:36.993730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.725 [2024-04-18 17:34:36.993759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.725 [2024-04-18 17:34:36.993788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.725 [2024-04-18 17:34:36.993816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.725 [2024-04-18 17:34:36.993844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.725 [2024-04-18 17:34:36.993874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.725 [2024-04-18 17:34:36.993903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.993976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.993990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.994005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.994019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.994038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.994052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.994067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.994081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.994096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.994110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.994125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.994139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 
17:34:36.994154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.994167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.994182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.994195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.725 [2024-04-18 17:34:36.994211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x187200 00:18:20.725 [2024-04-18 17:34:36.994225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:36.994240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:36.994253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:36.994268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:36.994282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:36.994297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:36.994311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:36.994325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:36.994339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:36.994354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:36.994374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:36.994413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:36.994429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:36.994445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:36.994460] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:36.994475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:36.994490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:36.994505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:36.994519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:36.994535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:36.994548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:36.995995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:20.726 [2024-04-18 17:34:36.996024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:20.726 [2024-04-18 17:34:36.996038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42472 len:8 PRP1 0x0 PRP2 0x0 00:18:20.726 [2024-04-18 17:34:36.996052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:36.996125] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4940 was disconnected and freed. reset controller. 00:18:20.726 [2024-04-18 17:34:36.996143] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:18:20.726 [2024-04-18 17:34:36.996159] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:20.726 [2024-04-18 17:34:36.999430] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:20.726 [2024-04-18 17:34:37.015781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:20.726 [2024-04-18 17:34:37.059878] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:20.726 [2024-04-18 17:34:41.489425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:57560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:57568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:41.489688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:41.489717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:41.489746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:57576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:57592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x187200 00:18:20.726 [2024-04-18 17:34:41.489982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.489997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:41.490010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.490025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 
[2024-04-18 17:34:41.490039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.490053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:41.490067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.490082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:41.490096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.490111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:41.490124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.726 [2024-04-18 17:34:41.490140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.726 [2024-04-18 17:34:41.490153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.490181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.490210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.490238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.490271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.490300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:58200 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.490329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.490358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:57656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:57664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:57688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 
17:34:41.490630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:57720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x187200 00:18:20.727 [2024-04-18 17:34:41.490881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.490910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.490939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.490968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.490986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.491000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.491015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.491029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.491044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.491058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.491073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.491086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.491102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.727 [2024-04-18 17:34:41.491115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.727 [2024-04-18 17:34:41.491130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x187200 00:18:20.728 [2024-04-18 17:34:41.491144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:57776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x187200 00:18:20.728 [2024-04-18 17:34:41.491172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57784 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000756a000 len:0x1000 key:0x187200 00:18:20.728 [2024-04-18 17:34:41.491201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x187200 00:18:20.728 [2024-04-18 17:34:41.491230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x187200 00:18:20.728 [2024-04-18 17:34:41.491259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x187200 00:18:20.728 [2024-04-18 17:34:41.491288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x187200 00:18:20.728 [2024-04-18 17:34:41.491320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x187200 00:18:20.728 [2024-04-18 17:34:41.491350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:57832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x187200 00:18:20.728 [2024-04-18 17:34:41.491640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:57840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x187200 00:18:20.728 [2024-04-18 17:34:41.491671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x187200 00:18:20.728 [2024-04-18 17:34:41.491719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58360 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.491980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.491994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.492008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.492023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.492037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.492052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.492066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.492084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:106 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.492098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.492113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.492127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.728 [2024-04-18 17:34:41.492142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.728 [2024-04-18 17:34:41.492156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.729 [2024-04-18 17:34:41.492185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:57856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:57872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:57880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:57896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.729 [2024-04-18 17:34:41.492732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.729 [2024-04-18 17:34:41.492761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.729 [2024-04-18 17:34:41.492789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.729 [2024-04-18 17:34:41.492825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.729 [2024-04-18 17:34:41.492853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.729 [2024-04-18 17:34:41.492882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.729 [2024-04-18 17:34:41.492911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.729 [2024-04-18 17:34:41.492940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 
key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.492985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:57992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.492999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.493015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.493028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.493044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.493057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.493072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.493085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.493100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.493114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.493130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.493147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.493164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.493177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.493192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.493206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.493221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 
17:34:41.493235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.493250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.493263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.729 [2024-04-18 17:34:41.493278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x187200 00:18:20.729 [2024-04-18 17:34:41.493291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.730 [2024-04-18 17:34:41.493306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x187200 00:18:20.730 [2024-04-18 17:34:41.493320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.730 [2024-04-18 17:34:41.493334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:20.730 [2024-04-18 17:34:41.493348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32766 cdw0:3eff200 sqhd:03c0 p:0 m:0 dnr:0 00:18:20.730 [2024-04-18 17:34:41.494736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:20.730 [2024-04-18 17:34:41.494757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:20.730 [2024-04-18 17:34:41.494769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58544 len:8 PRP1 0x0 PRP2 0x0 00:18:20.730 [2024-04-18 17:34:41.494783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.730 [2024-04-18 17:34:41.494836] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4940 was disconnected and freed. reset controller. 00:18:20.730 [2024-04-18 17:34:41.494855] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:18:20.730 [2024-04-18 17:34:41.494870] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:20.730 [2024-04-18 17:34:41.498144] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:20.730 [2024-04-18 17:34:41.514053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:20.730 [2024-04-18 17:34:41.561747] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
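The flood of NOTICE lines above is bdevperf printing one entry for every in-flight command that was aborted with ABORTED - SQ DELETION while the active RDMA path was being torn down; the "Resetting controller successful." line that closes the block is the bdev_nvme failover finishing on the alternate trid. failover.sh quantifies the second event itself (the grep -c at host/failover.sh@65 just below), and the same idea extends to the aborts. A minimal sketch, with $LOG standing in for whichever captured bdevperf log file is being inspected:

# $LOG: path of the captured bdevperf output (e.g. the try.txt used by failover.sh)
grep -c 'ABORTED - SQ DELETION' "$LOG"            # commands aborted by the SQ deletion
grep -c 'Resetting controller successful' "$LOG"  # completed failovers; the test expects 3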
00:18:20.730 00:18:20.730 Latency(us) 00:18:20.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.730 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:20.730 Verification LBA range: start 0x0 length 0x4000 00:18:20.730 NVMe0n1 : 15.01 11519.72 45.00 245.57 0.00 10855.58 591.64 1019060.53 00:18:20.730 =================================================================================================================== 00:18:20.730 Total : 11519.72 45.00 245.57 0.00 10855.58 591.64 1019060.53 00:18:20.730 Received shutdown signal, test time was about 15.000000 seconds 00:18:20.730 00:18:20.730 Latency(us) 00:18:20.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.730 =================================================================================================================== 00:18:20.730 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:20.730 17:34:46 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:20.730 17:34:46 -- host/failover.sh@65 -- # count=3 00:18:20.730 17:34:46 -- host/failover.sh@67 -- # (( count != 3 )) 00:18:20.730 17:34:46 -- host/failover.sh@73 -- # bdevperf_pid=2054248 00:18:20.730 17:34:46 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:20.730 17:34:46 -- host/failover.sh@75 -- # waitforlisten 2054248 /var/tmp/bdevperf.sock 00:18:20.730 17:34:46 -- common/autotest_common.sh@817 -- # '[' -z 2054248 ']' 00:18:20.730 17:34:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:20.730 17:34:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:20.730 17:34:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:20.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
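Having counted three successful resets from the first run, failover.sh starts a second bdevperf with -z, which keeps it idle until it is configured over its JSON-RPC UNIX socket (-r /var/tmp/bdevperf.sock), and then waits for that socket. The same step in isolation, using the binary path and options shown above; the polling loop is a stand-in for the suite's waitforlisten helper, and rpc_get_methods is only used here as a cheap RPC to probe the socket with:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

# -z: start with no job and wait for RPC-driven configuration on $SOCK
"$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!

# wait until the RPC server answers (stand-in for the suite's waitforlisten)
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done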
00:18:20.730 17:34:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:20.730 17:34:46 -- common/autotest_common.sh@10 -- # set +x 00:18:20.730 17:34:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:20.730 17:34:46 -- common/autotest_common.sh@850 -- # return 0 00:18:20.730 17:34:46 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:20.730 [2024-04-18 17:34:47.144323] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:20.730 17:34:47 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:18:21.013 [2024-04-18 17:34:47.389087] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:18:21.014 17:34:47 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:21.271 NVMe0n1 00:18:21.271 17:34:47 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:21.529 00:18:21.529 17:34:48 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:21.787 00:18:22.045 17:34:48 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:22.045 17:34:48 -- host/failover.sh@82 -- # grep -q NVMe0 00:18:22.045 17:34:48 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:22.305 17:34:48 -- host/failover.sh@87 -- # sleep 3 00:18:25.597 17:34:51 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:25.597 17:34:51 -- host/failover.sh@88 -- # grep -q NVMe0 00:18:25.597 17:34:52 -- host/failover.sh@90 -- # run_test_pid=2054920 00:18:25.597 17:34:52 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:25.597 17:34:52 -- host/failover.sh@92 -- # wait 2054920 00:18:26.974 0 00:18:26.974 17:34:53 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:26.974 [2024-04-18 17:34:46.656056] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
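The trace above is the multipath setup for that second bdevperf: two extra listeners are added to the target subsystem (ports 4421 and 4422), and the same subsystem is attached three times through the bdevperf socket so the single NVMe0n1 bdev ends up with three RDMA paths; the script then detaches the active path, checks the controller is still registered, and kicks off I/O with bdevperf.py perform_tests. Condensed into the RPCs that actually appear in the trace (addresses, NQN and socket copied from the log):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# expose two more portals on the target side
"$RPC" nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4421
"$RPC" nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4422

# attach the same subsystem over all three portals -> one bdev, three paths
for port in 4420 4421 4422; do
        "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t rdma \
                -a 192.168.100.8 -s "$port" -f ipv4 -n "$NQN"
done

# drop the active path and confirm the controller is still there
"$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 \
        -s 4420 -f ipv4 -n "$NQN"
"$RPC" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0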
00:18:26.974 [2024-04-18 17:34:46.656134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054248 ] 00:18:26.974 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.974 [2024-04-18 17:34:46.720424] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.974 [2024-04-18 17:34:46.823952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.974 [2024-04-18 17:34:48.792038] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:18:26.974 [2024-04-18 17:34:48.792575] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:26.974 [2024-04-18 17:34:48.792643] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:26.974 [2024-04-18 17:34:48.810648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:26.974 [2024-04-18 17:34:48.827210] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:26.974 Running I/O for 1 seconds... 00:18:26.974 00:18:26.974 Latency(us) 00:18:26.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.974 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:26.974 Verification LBA range: start 0x0 length 0x4000 00:18:26.974 NVMe0n1 : 1.00 14527.39 56.75 0.00 0.00 8758.52 188.11 19223.89 00:18:26.974 =================================================================================================================== 00:18:26.974 Total : 14527.39 56.75 0.00 0.00 8758.52 188.11 19223.89 00:18:26.974 17:34:53 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:26.974 17:34:53 -- host/failover.sh@95 -- # grep -q NVMe0 00:18:26.974 17:34:53 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:27.542 17:34:53 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:27.542 17:34:53 -- host/failover.sh@99 -- # grep -q NVMe0 00:18:27.542 17:34:54 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:27.800 17:34:54 -- host/failover.sh@101 -- # sleep 3 00:18:31.089 17:34:57 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:31.089 17:34:57 -- host/failover.sh@103 -- # grep -q NVMe0 00:18:31.089 17:34:57 -- host/failover.sh@108 -- # killprocess 2054248 00:18:31.089 17:34:57 -- common/autotest_common.sh@936 -- # '[' -z 2054248 ']' 00:18:31.089 17:34:57 -- common/autotest_common.sh@940 -- # kill -0 2054248 00:18:31.089 17:34:57 -- common/autotest_common.sh@941 -- # uname 00:18:31.089 17:34:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.089 17:34:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 2054248 00:18:31.089 17:34:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:31.090 17:34:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:31.090 17:34:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2054248' 00:18:31.090 killing process with pid 2054248 00:18:31.090 17:34:57 -- common/autotest_common.sh@955 -- # kill 2054248 00:18:31.090 17:34:57 -- common/autotest_common.sh@960 -- # wait 2054248 00:18:31.348 17:34:57 -- host/failover.sh@110 -- # sync 00:18:31.348 17:34:57 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:31.606 17:34:58 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:31.606 17:34:58 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:31.606 17:34:58 -- host/failover.sh@116 -- # nvmftestfini 00:18:31.606 17:34:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:31.606 17:34:58 -- nvmf/common.sh@117 -- # sync 00:18:31.606 17:34:58 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:31.606 17:34:58 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:31.606 17:34:58 -- nvmf/common.sh@120 -- # set +e 00:18:31.606 17:34:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:31.606 17:34:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:31.606 rmmod nvme_rdma 00:18:31.606 rmmod nvme_fabrics 00:18:31.606 17:34:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:31.606 17:34:58 -- nvmf/common.sh@124 -- # set -e 00:18:31.606 17:34:58 -- nvmf/common.sh@125 -- # return 0 00:18:31.606 17:34:58 -- nvmf/common.sh@478 -- # '[' -n 2052099 ']' 00:18:31.606 17:34:58 -- nvmf/common.sh@479 -- # killprocess 2052099 00:18:31.606 17:34:58 -- common/autotest_common.sh@936 -- # '[' -z 2052099 ']' 00:18:31.606 17:34:58 -- common/autotest_common.sh@940 -- # kill -0 2052099 00:18:31.606 17:34:58 -- common/autotest_common.sh@941 -- # uname 00:18:31.606 17:34:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.606 17:34:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2052099 00:18:31.866 17:34:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:31.866 17:34:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:31.866 17:34:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2052099' 00:18:31.866 killing process with pid 2052099 00:18:31.866 17:34:58 -- common/autotest_common.sh@955 -- # kill 2052099 00:18:31.866 17:34:58 -- common/autotest_common.sh@960 -- # wait 2052099 00:18:32.124 17:34:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:32.124 17:34:58 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:32.124 00:18:32.124 real 0m32.677s 00:18:32.124 user 2m4.597s 00:18:32.124 sys 0m3.663s 00:18:32.124 17:34:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:32.124 17:34:58 -- common/autotest_common.sh@10 -- # set +x 00:18:32.124 ************************************ 00:18:32.124 END TEST nvmf_failover 00:18:32.124 ************************************ 00:18:32.124 17:34:58 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:18:32.124 17:34:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:32.124 17:34:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:32.124 17:34:58 -- common/autotest_common.sh@10 -- # set +x 
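The tail of nvmf_failover above is the teardown: the RPC-driven bdevperf (pid 2054248) is killed, the cnode1 subsystem is deleted on the target, the scratch try.txt is removed, the nvme-rdma and nvme-fabrics initiator modules are unloaded, and finally the nvmf_tgt application (pid 2052099, reactor_1) is killed before the timing summary is printed. Reduced to the commands visible in the trace, with the two run-specific pids kept as shell variables:

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

kill "$bdevperf_pid"                                   # stop the RPC-driven bdevperf
"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt

modprobe -v -r nvme-rdma                               # host-side fabrics modules
modprobe -v -r nvme-fabrics

kill "$nvmfpid"                                        # stop the nvmf_tgt application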
00:18:32.124 ************************************ 00:18:32.124 START TEST nvmf_discovery 00:18:32.124 ************************************ 00:18:32.124 17:34:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:18:32.383 * Looking for test storage... 00:18:32.383 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:32.383 17:34:58 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.383 17:34:58 -- nvmf/common.sh@7 -- # uname -s 00:18:32.383 17:34:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.383 17:34:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.383 17:34:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.383 17:34:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.383 17:34:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.383 17:34:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.383 17:34:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.383 17:34:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.383 17:34:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.383 17:34:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.383 17:34:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.383 17:34:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.383 17:34:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.383 17:34:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.383 17:34:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.383 17:34:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.383 17:34:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:32.383 17:34:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.383 17:34:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.383 17:34:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.383 17:34:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.383 17:34:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.383 17:34:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.383 17:34:58 -- paths/export.sh@5 -- # export PATH 00:18:32.383 17:34:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.383 17:34:58 -- nvmf/common.sh@47 -- # : 0 00:18:32.383 17:34:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:32.383 17:34:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:32.383 17:34:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.383 17:34:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.383 17:34:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.383 17:34:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:32.383 17:34:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:32.383 17:34:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:32.383 17:34:58 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:18:32.383 17:34:58 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:18:32.383 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:18:32.383 17:34:58 -- host/discovery.sh@13 -- # exit 0 00:18:32.383 00:18:32.383 real 0m0.071s 00:18:32.383 user 0m0.034s 00:18:32.383 sys 0m0.042s 00:18:32.384 17:34:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:32.384 17:34:58 -- common/autotest_common.sh@10 -- # set +x 00:18:32.384 ************************************ 00:18:32.384 END TEST nvmf_discovery 00:18:32.384 ************************************ 00:18:32.384 17:34:58 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:18:32.384 17:34:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:32.384 17:34:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:32.384 17:34:58 -- common/autotest_common.sh@10 -- # set +x 00:18:32.384 ************************************ 00:18:32.384 START TEST nvmf_discovery_remove_ifc 00:18:32.384 ************************************ 00:18:32.384 17:34:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:18:32.384 * Looking for test storage... 
00:18:32.384 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:32.384 17:34:58 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.384 17:34:58 -- nvmf/common.sh@7 -- # uname -s 00:18:32.384 17:34:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.384 17:34:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.384 17:34:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.384 17:34:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.384 17:34:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.384 17:34:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.384 17:34:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.384 17:34:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.384 17:34:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.384 17:34:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.384 17:34:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.384 17:34:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.384 17:34:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.384 17:34:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.384 17:34:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.384 17:34:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.384 17:34:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:32.384 17:34:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.384 17:34:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.384 17:34:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.384 17:34:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.384 17:34:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.384 17:34:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.384 17:34:58 -- paths/export.sh@5 -- # export PATH 00:18:32.384 17:34:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.384 17:34:58 -- nvmf/common.sh@47 -- # : 0 00:18:32.384 17:34:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:32.384 17:34:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:32.384 17:34:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.384 17:34:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.384 17:34:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.384 17:34:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:32.384 17:34:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:32.384 17:34:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:32.384 17:34:58 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:18:32.384 17:34:58 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:18:32.384 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:18:32.384 17:34:58 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:18:32.384 00:18:32.384 real 0m0.056s 00:18:32.384 user 0m0.023s 00:18:32.384 sys 0m0.037s 00:18:32.384 17:34:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:32.384 17:34:58 -- common/autotest_common.sh@10 -- # set +x 00:18:32.384 ************************************ 00:18:32.384 END TEST nvmf_discovery_remove_ifc 00:18:32.384 ************************************ 00:18:32.384 17:34:58 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:18:32.384 17:34:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:32.384 17:34:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:32.384 17:34:58 -- common/autotest_common.sh@10 -- # set +x 00:18:32.642 ************************************ 00:18:32.642 START TEST nvmf_identify_kernel_target 00:18:32.642 ************************************ 00:18:32.642 17:34:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:18:32.642 * Looking for test storage... 
00:18:32.642 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:32.642 17:34:59 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.642 17:34:59 -- nvmf/common.sh@7 -- # uname -s 00:18:32.642 17:34:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.642 17:34:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.642 17:34:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.642 17:34:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.642 17:34:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.642 17:34:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.642 17:34:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.642 17:34:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.642 17:34:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.642 17:34:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.642 17:34:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.642 17:34:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.642 17:34:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.642 17:34:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.642 17:34:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.642 17:34:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.642 17:34:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:32.642 17:34:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.642 17:34:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.642 17:34:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.642 17:34:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.642 17:34:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.642 17:34:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.642 17:34:59 -- paths/export.sh@5 -- # export PATH 00:18:32.642 17:34:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.642 17:34:59 -- nvmf/common.sh@47 -- # : 0 00:18:32.642 17:34:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:32.642 17:34:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:32.642 17:34:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.642 17:34:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.642 17:34:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.642 17:34:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:32.642 17:34:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:32.642 17:34:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:32.642 17:34:59 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:32.642 17:34:59 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:32.642 17:34:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.642 17:34:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:32.642 17:34:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:32.642 17:34:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:32.642 17:34:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.643 17:34:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.643 17:34:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.643 17:34:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:32.643 17:34:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:32.643 17:34:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:32.643 17:34:59 -- common/autotest_common.sh@10 -- # set +x 00:18:34.551 17:35:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:34.551 17:35:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:34.552 17:35:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:34.552 17:35:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:34.552 17:35:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:34.552 17:35:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:34.552 17:35:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:34.552 17:35:00 -- nvmf/common.sh@295 -- # net_devs=() 00:18:34.552 17:35:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:34.552 17:35:00 -- nvmf/common.sh@296 -- # e810=() 00:18:34.552 17:35:00 -- nvmf/common.sh@296 -- # local -ga e810 00:18:34.552 17:35:00 -- nvmf/common.sh@297 -- # 
x722=() 00:18:34.552 17:35:00 -- nvmf/common.sh@297 -- # local -ga x722 00:18:34.552 17:35:00 -- nvmf/common.sh@298 -- # mlx=() 00:18:34.552 17:35:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:34.552 17:35:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:34.552 17:35:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:34.552 17:35:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:34.552 17:35:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:34.552 17:35:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:34.552 17:35:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:34.552 17:35:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:34.552 17:35:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:34.552 17:35:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:34.552 17:35:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:34.552 17:35:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:34.552 17:35:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:34.552 17:35:00 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:34.552 17:35:00 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:34.552 17:35:00 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:34.552 17:35:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:34.552 17:35:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.552 17:35:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:18:34.552 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:18:34.552 17:35:00 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:34.552 17:35:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.552 17:35:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:18:34.552 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:18:34.552 17:35:00 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:34.552 17:35:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:34.552 17:35:00 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.552 17:35:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.552 17:35:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:34.552 17:35:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.552 17:35:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:18:34.552 Found net devices under 0000:09:00.0: mlx_0_0 00:18:34.552 17:35:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.552 17:35:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.552 
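identify_kernel_nvmf.sh goes through the shared nvmftestinit path first: common.sh walks its table of supported NIC device ids, finds the two Mellanox 0x15b3:0x1017 (ConnectX-5) functions at 0000:09:00.0 and 0000:09:00.1, and resolves each PCI function to its netdev through sysfs, which is where mlx_0_0 and mlx_0_1 come from. The sysfs lookup on its own (PCI addresses copied from the log; a sketch of this one step, not of the full common.sh logic):

for pci in 0000:09:00.0 0000:09:00.1; do
        cat "/sys/bus/pci/devices/$pci/device"   # 0x1017 on this node (ConnectX-5)
        ls  "/sys/bus/pci/devices/$pci/net/"     # -> mlx_0_0 / mlx_0_1
done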
17:35:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.552 17:35:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:34.552 17:35:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.552 17:35:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:18:34.552 Found net devices under 0000:09:00.1: mlx_0_1 00:18:34.552 17:35:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.552 17:35:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:34.552 17:35:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:34.552 17:35:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:34.552 17:35:00 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:34.552 17:35:00 -- nvmf/common.sh@58 -- # uname 00:18:34.552 17:35:00 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:34.552 17:35:00 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:34.552 17:35:00 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:34.552 17:35:00 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:34.552 17:35:00 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:34.552 17:35:00 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:34.552 17:35:00 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:34.552 17:35:00 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:34.552 17:35:00 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:34.552 17:35:00 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:34.552 17:35:00 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:34.552 17:35:00 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:34.552 17:35:00 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:34.552 17:35:00 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:34.552 17:35:00 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:34.552 17:35:00 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:34.552 17:35:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.552 17:35:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.552 17:35:00 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:34.552 17:35:00 -- nvmf/common.sh@105 -- # continue 2 00:18:34.552 17:35:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.552 17:35:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.552 17:35:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.552 17:35:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:34.552 17:35:00 -- nvmf/common.sh@105 -- # continue 2 00:18:34.552 17:35:00 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:34.552 17:35:00 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:34.552 17:35:00 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:34.552 17:35:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:34.552 17:35:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.552 17:35:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.552 17:35:00 -- nvmf/common.sh@74 -- # ip=192.168.100.8 
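allocate_nic_ips then reads the IPv4 address off each RDMA netdev with the ip/awk/cut pipeline traced above, confirming the fixed test addresses 192.168.100.8/24 and 192.168.100.9/24 before the kernel target is configured. The same check stand-alone, with the interface names from this node:

for ifc in mlx_0_0 mlx_0_1; do
        # column 4 of `ip -o -4 addr show` is ADDR/PREFIX; strip the prefix length
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# -> 192.168.100.8 and 192.168.100.9 here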
00:18:34.552 17:35:00 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:34.552 17:35:00 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:34.552 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:34.552 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:18:34.552 altname enp9s0f0np0 00:18:34.552 inet 192.168.100.8/24 scope global mlx_0_0 00:18:34.552 valid_lft forever preferred_lft forever 00:18:34.552 17:35:00 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:34.553 17:35:00 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:34.553 17:35:00 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:34.553 17:35:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:34.553 17:35:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.553 17:35:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.553 17:35:00 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:34.553 17:35:00 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:34.553 17:35:00 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:34.553 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:34.553 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:18:34.553 altname enp9s0f1np1 00:18:34.553 inet 192.168.100.9/24 scope global mlx_0_1 00:18:34.553 valid_lft forever preferred_lft forever 00:18:34.553 17:35:00 -- nvmf/common.sh@411 -- # return 0 00:18:34.553 17:35:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:34.553 17:35:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:34.553 17:35:00 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:34.553 17:35:00 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:34.553 17:35:00 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:34.553 17:35:00 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:34.553 17:35:00 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:34.553 17:35:00 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:34.553 17:35:00 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:34.553 17:35:00 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:34.553 17:35:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.553 17:35:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.553 17:35:00 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:34.553 17:35:00 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:34.553 17:35:00 -- nvmf/common.sh@105 -- # continue 2 00:18:34.553 17:35:00 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:34.553 17:35:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.553 17:35:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:34.553 17:35:00 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:34.553 17:35:00 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:34.553 17:35:00 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:34.553 17:35:00 -- nvmf/common.sh@105 -- # continue 2 00:18:34.553 17:35:00 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:34.553 17:35:00 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:34.553 17:35:00 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:34.553 17:35:00 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:34.553 17:35:00 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.553 17:35:00 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.553 17:35:00 -- nvmf/common.sh@86 -- # 
for nic_name in $(get_rdma_if_list) 00:18:34.553 17:35:00 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:34.553 17:35:00 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:34.553 17:35:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:34.553 17:35:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:34.553 17:35:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:34.553 17:35:01 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:34.553 192.168.100.9' 00:18:34.553 17:35:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:34.553 192.168.100.9' 00:18:34.553 17:35:01 -- nvmf/common.sh@446 -- # head -n 1 00:18:34.553 17:35:01 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:34.553 17:35:01 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:34.553 192.168.100.9' 00:18:34.553 17:35:01 -- nvmf/common.sh@447 -- # tail -n +2 00:18:34.553 17:35:01 -- nvmf/common.sh@447 -- # head -n 1 00:18:34.553 17:35:01 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:34.553 17:35:01 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:34.553 17:35:01 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:34.553 17:35:01 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:34.553 17:35:01 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:34.553 17:35:01 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:34.553 17:35:01 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:34.553 17:35:01 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:34.553 17:35:01 -- nvmf/common.sh@717 -- # local ip 00:18:34.553 17:35:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:34.553 17:35:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:34.553 17:35:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.553 17:35:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.553 17:35:01 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:34.553 17:35:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:34.553 17:35:01 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:34.553 17:35:01 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:34.553 17:35:01 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:34.553 17:35:01 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:18:34.553 17:35:01 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:18:34.553 17:35:01 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:18:34.553 17:35:01 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:18:34.553 17:35:01 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:34.553 17:35:01 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:34.553 17:35:01 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:34.553 17:35:01 -- nvmf/common.sh@628 -- # local block nvme 00:18:34.553 17:35:01 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:34.553 17:35:01 -- nvmf/common.sh@631 -- # modprobe nvmet 00:18:34.553 17:35:01 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:34.553 17:35:01 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:18:35.933 Waiting for block devices as requested 00:18:35.933 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:18:35.933 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:35.933 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:35.933 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:35.933 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:35.933 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:36.191 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:36.191 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:36.191 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:36.191 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:36.191 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:36.450 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:36.451 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:36.451 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:36.451 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:36.709 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:36.709 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:36.709 17:35:03 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:18:36.709 17:35:03 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:36.709 17:35:03 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:18:36.709 17:35:03 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:36.709 17:35:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:36.709 17:35:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:36.709 17:35:03 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:18:36.709 17:35:03 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:36.709 17:35:03 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:18:36.969 No valid GPT data, bailing 00:18:36.969 17:35:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:36.969 17:35:03 -- scripts/common.sh@391 -- # pt= 00:18:36.969 17:35:03 -- scripts/common.sh@392 -- # return 1 00:18:36.969 17:35:03 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:18:36.969 17:35:03 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:18:36.969 17:35:03 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:36.969 17:35:03 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:36.969 17:35:03 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:36.969 17:35:03 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:36.969 17:35:03 -- nvmf/common.sh@656 -- # echo 1 00:18:36.969 17:35:03 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:18:36.969 17:35:03 -- nvmf/common.sh@658 -- # echo 1 00:18:36.969 17:35:03 -- nvmf/common.sh@660 -- # echo 192.168.100.8 00:18:36.969 17:35:03 -- nvmf/common.sh@661 -- # echo rdma 00:18:36.969 17:35:03 -- nvmf/common.sh@662 -- # echo 4420 00:18:36.969 17:35:03 -- nvmf/common.sh@663 -- # echo ipv4 00:18:36.969 17:35:03 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:36.969 17:35:03 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 192.168.100.8 -t rdma -s 4420 00:18:36.969 00:18:36.969 Discovery Log Number of Records 2, Generation counter 2 00:18:36.969 =====Discovery Log Entry 0====== 00:18:36.969 trtype: rdma 00:18:36.969 adrfam: ipv4 00:18:36.969 subtype: current discovery subsystem 00:18:36.969 treq: not specified, sq flow control disable supported 00:18:36.969 portid: 1 00:18:36.969 trsvcid: 4420 00:18:36.969 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:36.969 traddr: 192.168.100.8 00:18:36.969 eflags: none 00:18:36.969 rdma_prtype: not specified 00:18:36.969 rdma_qptype: connected 00:18:36.969 rdma_cms: rdma-cm 00:18:36.969 rdma_pkey: 0x0000 00:18:36.969 =====Discovery Log Entry 1====== 00:18:36.969 trtype: rdma 00:18:36.969 adrfam: ipv4 00:18:36.969 subtype: nvme subsystem 00:18:36.969 treq: not specified, sq flow control disable supported 00:18:36.969 portid: 1 00:18:36.969 trsvcid: 4420 00:18:36.969 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:36.969 traddr: 192.168.100.8 00:18:36.969 eflags: none 00:18:36.969 rdma_prtype: not specified 00:18:36.969 rdma_qptype: connected 00:18:36.969 rdma_cms: rdma-cm 00:18:36.969 rdma_pkey: 0x0000 00:18:36.969 17:35:03 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:18:36.969 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:36.969 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.969 ===================================================== 00:18:36.969 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:36.969 ===================================================== 00:18:36.969 Controller Capabilities/Features 00:18:36.969 ================================ 00:18:36.969 Vendor ID: 0000 00:18:36.970 Subsystem Vendor ID: 0000 00:18:36.970 Serial Number: af61e0c09ccb1612f027 00:18:36.970 Model Number: Linux 00:18:36.970 Firmware Version: 6.7.0-68 00:18:36.970 Recommended Arb Burst: 0 00:18:36.970 IEEE OUI Identifier: 00 00 00 00:18:36.970 Multi-path I/O 00:18:36.970 May have multiple subsystem ports: No 00:18:36.970 May have multiple controllers: No 00:18:36.970 Associated with SR-IOV VF: No 00:18:36.970 Max Data Transfer Size: Unlimited 00:18:36.970 Max Number of Namespaces: 0 00:18:36.970 Max Number of I/O Queues: 1024 00:18:36.970 NVMe Specification Version (VS): 1.3 00:18:36.970 NVMe Specification Version (Identify): 1.3 00:18:36.970 Maximum Queue Entries: 128 00:18:36.970 Contiguous Queues Required: No 00:18:36.970 Arbitration Mechanisms Supported 00:18:36.970 Weighted Round Robin: Not Supported 00:18:36.970 Vendor Specific: Not Supported 00:18:36.970 Reset Timeout: 7500 ms 00:18:36.970 Doorbell Stride: 4 bytes 00:18:36.970 NVM Subsystem Reset: Not Supported 00:18:36.970 Command Sets Supported 00:18:36.970 NVM Command Set: Supported 00:18:36.970 Boot Partition: Not Supported 00:18:36.970 Memory Page Size Minimum: 4096 bytes 00:18:36.970 Memory Page Size Maximum: 4096 bytes 00:18:36.970 Persistent Memory Region: Not Supported 00:18:36.970 Optional Asynchronous Events Supported 00:18:36.970 Namespace Attribute Notices: Not Supported 00:18:36.970 Firmware Activation Notices: Not Supported 00:18:36.970 ANA Change Notices: Not Supported 00:18:36.970 PLE Aggregate Log Change Notices: Not Supported 00:18:36.970 LBA Status Info Alert Notices: Not Supported 
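The two discovery log entries above are served by a kernel nvmet target that identify_kernel_nvmf.sh configured through configfs right before running nvme discover: an nqn.2016-06.io.spdk:testnqn subsystem backed by the local /dev/nvme0n1, exported over RDMA on 192.168.100.8:4420. The trace only shows the values being echoed, so the configfs attribute file names below are the standard nvmet ones rather than something read from this log; treat it as a sketch of the same setup:

sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                        # the RDMA transport module (nvmet-rdma) must also be available
mkdir "$sub"
mkdir "$sub/namespaces/1"
mkdir "$port"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"
echo 1                                > "$sub/attr_allow_any_host"
echo /dev/nvme0n1                     > "$sub/namespaces/1/device_path"
echo 1                                > "$sub/namespaces/1/enable"

echo 192.168.100.8 > "$port/addr_traddr"
echo rdma          > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"      # expose the subsystem on the port

nvme discover -t rdma -a 192.168.100.8 -s 4420   # should list both entries shown above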
00:18:36.970 EGE Aggregate Log Change Notices: Not Supported 00:18:36.970 Normal NVM Subsystem Shutdown event: Not Supported 00:18:36.970 Zone Descriptor Change Notices: Not Supported 00:18:36.970 Discovery Log Change Notices: Supported 00:18:36.970 Controller Attributes 00:18:36.970 128-bit Host Identifier: Not Supported 00:18:36.970 Non-Operational Permissive Mode: Not Supported 00:18:36.970 NVM Sets: Not Supported 00:18:36.970 Read Recovery Levels: Not Supported 00:18:36.970 Endurance Groups: Not Supported 00:18:36.970 Predictable Latency Mode: Not Supported 00:18:36.970 Traffic Based Keep ALive: Not Supported 00:18:36.970 Namespace Granularity: Not Supported 00:18:36.970 SQ Associations: Not Supported 00:18:36.970 UUID List: Not Supported 00:18:36.970 Multi-Domain Subsystem: Not Supported 00:18:36.970 Fixed Capacity Management: Not Supported 00:18:36.970 Variable Capacity Management: Not Supported 00:18:36.970 Delete Endurance Group: Not Supported 00:18:36.970 Delete NVM Set: Not Supported 00:18:36.970 Extended LBA Formats Supported: Not Supported 00:18:36.970 Flexible Data Placement Supported: Not Supported 00:18:36.970 00:18:36.970 Controller Memory Buffer Support 00:18:36.970 ================================ 00:18:36.970 Supported: No 00:18:36.970 00:18:36.970 Persistent Memory Region Support 00:18:36.970 ================================ 00:18:36.970 Supported: No 00:18:36.970 00:18:36.970 Admin Command Set Attributes 00:18:36.970 ============================ 00:18:36.970 Security Send/Receive: Not Supported 00:18:36.970 Format NVM: Not Supported 00:18:36.970 Firmware Activate/Download: Not Supported 00:18:36.970 Namespace Management: Not Supported 00:18:36.970 Device Self-Test: Not Supported 00:18:36.970 Directives: Not Supported 00:18:36.970 NVMe-MI: Not Supported 00:18:36.970 Virtualization Management: Not Supported 00:18:36.970 Doorbell Buffer Config: Not Supported 00:18:36.970 Get LBA Status Capability: Not Supported 00:18:36.970 Command & Feature Lockdown Capability: Not Supported 00:18:36.970 Abort Command Limit: 1 00:18:36.970 Async Event Request Limit: 1 00:18:36.970 Number of Firmware Slots: N/A 00:18:36.970 Firmware Slot 1 Read-Only: N/A 00:18:36.970 Firmware Activation Without Reset: N/A 00:18:36.970 Multiple Update Detection Support: N/A 00:18:36.970 Firmware Update Granularity: No Information Provided 00:18:36.970 Per-Namespace SMART Log: No 00:18:36.970 Asymmetric Namespace Access Log Page: Not Supported 00:18:36.970 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:36.970 Command Effects Log Page: Not Supported 00:18:36.970 Get Log Page Extended Data: Supported 00:18:36.970 Telemetry Log Pages: Not Supported 00:18:36.970 Persistent Event Log Pages: Not Supported 00:18:36.970 Supported Log Pages Log Page: May Support 00:18:36.970 Commands Supported & Effects Log Page: Not Supported 00:18:36.970 Feature Identifiers & Effects Log Page:May Support 00:18:36.970 NVMe-MI Commands & Effects Log Page: May Support 00:18:36.970 Data Area 4 for Telemetry Log: Not Supported 00:18:36.970 Error Log Page Entries Supported: 1 00:18:36.970 Keep Alive: Not Supported 00:18:36.970 00:18:36.970 NVM Command Set Attributes 00:18:36.970 ========================== 00:18:36.970 Submission Queue Entry Size 00:18:36.970 Max: 1 00:18:36.970 Min: 1 00:18:36.970 Completion Queue Entry Size 00:18:36.970 Max: 1 00:18:36.970 Min: 1 00:18:36.970 Number of Namespaces: 0 00:18:36.970 Compare Command: Not Supported 00:18:36.970 Write Uncorrectable Command: Not Supported 00:18:36.970 Dataset 
Management Command: Not Supported 00:18:36.970 Write Zeroes Command: Not Supported 00:18:36.970 Set Features Save Field: Not Supported 00:18:36.970 Reservations: Not Supported 00:18:36.970 Timestamp: Not Supported 00:18:36.970 Copy: Not Supported 00:18:36.970 Volatile Write Cache: Not Present 00:18:36.970 Atomic Write Unit (Normal): 1 00:18:36.970 Atomic Write Unit (PFail): 1 00:18:36.970 Atomic Compare & Write Unit: 1 00:18:36.970 Fused Compare & Write: Not Supported 00:18:36.970 Scatter-Gather List 00:18:36.970 SGL Command Set: Supported 00:18:36.970 SGL Keyed: Supported 00:18:36.970 SGL Bit Bucket Descriptor: Not Supported 00:18:36.970 SGL Metadata Pointer: Not Supported 00:18:36.970 Oversized SGL: Not Supported 00:18:36.970 SGL Metadata Address: Not Supported 00:18:36.970 SGL Offset: Supported 00:18:36.970 Transport SGL Data Block: Not Supported 00:18:36.970 Replay Protected Memory Block: Not Supported 00:18:36.970 00:18:36.970 Firmware Slot Information 00:18:36.970 ========================= 00:18:36.970 Active slot: 0 00:18:36.970 00:18:36.970 00:18:36.970 Error Log 00:18:36.970 ========= 00:18:36.970 00:18:36.970 Active Namespaces 00:18:36.970 ================= 00:18:36.970 Discovery Log Page 00:18:36.970 ================== 00:18:36.970 Generation Counter: 2 00:18:36.970 Number of Records: 2 00:18:36.970 Record Format: 0 00:18:36.970 00:18:36.970 Discovery Log Entry 0 00:18:36.970 ---------------------- 00:18:36.970 Transport Type: 1 (RDMA) 00:18:36.970 Address Family: 1 (IPv4) 00:18:36.970 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:36.970 Entry Flags: 00:18:36.970 Duplicate Returned Information: 0 00:18:36.970 Explicit Persistent Connection Support for Discovery: 0 00:18:36.970 Transport Requirements: 00:18:36.970 Secure Channel: Not Specified 00:18:36.970 Port ID: 1 (0x0001) 00:18:36.970 Controller ID: 65535 (0xffff) 00:18:36.970 Admin Max SQ Size: 32 00:18:36.970 Transport Service Identifier: 4420 00:18:36.970 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:36.970 Transport Address: 192.168.100.8 00:18:36.970 Transport Specific Address Subtype - RDMA 00:18:36.970 RDMA QP Service Type: 1 (Reliable Connected) 00:18:36.970 RDMA Provider Type: 1 (No provider specified) 00:18:36.970 RDMA CM Service: 1 (RDMA_CM) 00:18:36.970 Discovery Log Entry 1 00:18:36.970 ---------------------- 00:18:36.970 Transport Type: 1 (RDMA) 00:18:36.970 Address Family: 1 (IPv4) 00:18:36.970 Subsystem Type: 2 (NVM Subsystem) 00:18:36.970 Entry Flags: 00:18:36.970 Duplicate Returned Information: 0 00:18:36.970 Explicit Persistent Connection Support for Discovery: 0 00:18:36.970 Transport Requirements: 00:18:36.970 Secure Channel: Not Specified 00:18:36.970 Port ID: 1 (0x0001) 00:18:36.970 Controller ID: 65535 (0xffff) 00:18:36.970 Admin Max SQ Size: 32 00:18:36.970 Transport Service Identifier: 4420 00:18:36.970 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:36.970 Transport Address: 192.168.100.8 00:18:36.970 Transport Specific Address Subtype - RDMA 00:18:36.970 RDMA QP Service Type: 1 (Reliable Connected) 00:18:37.230 RDMA Provider Type: 1 (No provider specified) 00:18:37.230 RDMA CM Service: 1 (RDMA_CM) 00:18:37.230 17:35:03 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:37.230 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.230 get_feature(0x01) failed 00:18:37.230 
get_feature(0x02) failed 00:18:37.230 get_feature(0x04) failed 00:18:37.230 ===================================================== 00:18:37.230 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:18:37.230 ===================================================== 00:18:37.230 Controller Capabilities/Features 00:18:37.230 ================================ 00:18:37.230 Vendor ID: 0000 00:18:37.230 Subsystem Vendor ID: 0000 00:18:37.230 Serial Number: 25070ea6669e98efa2e9 00:18:37.230 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:37.230 Firmware Version: 6.7.0-68 00:18:37.230 Recommended Arb Burst: 6 00:18:37.230 IEEE OUI Identifier: 00 00 00 00:18:37.230 Multi-path I/O 00:18:37.230 May have multiple subsystem ports: Yes 00:18:37.230 May have multiple controllers: Yes 00:18:37.230 Associated with SR-IOV VF: No 00:18:37.230 Max Data Transfer Size: 1048576 00:18:37.230 Max Number of Namespaces: 1024 00:18:37.230 Max Number of I/O Queues: 128 00:18:37.230 NVMe Specification Version (VS): 1.3 00:18:37.230 NVMe Specification Version (Identify): 1.3 00:18:37.230 Maximum Queue Entries: 128 00:18:37.230 Contiguous Queues Required: No 00:18:37.230 Arbitration Mechanisms Supported 00:18:37.230 Weighted Round Robin: Not Supported 00:18:37.230 Vendor Specific: Not Supported 00:18:37.230 Reset Timeout: 7500 ms 00:18:37.230 Doorbell Stride: 4 bytes 00:18:37.230 NVM Subsystem Reset: Not Supported 00:18:37.230 Command Sets Supported 00:18:37.230 NVM Command Set: Supported 00:18:37.230 Boot Partition: Not Supported 00:18:37.230 Memory Page Size Minimum: 4096 bytes 00:18:37.230 Memory Page Size Maximum: 4096 bytes 00:18:37.230 Persistent Memory Region: Not Supported 00:18:37.230 Optional Asynchronous Events Supported 00:18:37.230 Namespace Attribute Notices: Supported 00:18:37.230 Firmware Activation Notices: Not Supported 00:18:37.230 ANA Change Notices: Supported 00:18:37.230 PLE Aggregate Log Change Notices: Not Supported 00:18:37.230 LBA Status Info Alert Notices: Not Supported 00:18:37.230 EGE Aggregate Log Change Notices: Not Supported 00:18:37.230 Normal NVM Subsystem Shutdown event: Not Supported 00:18:37.230 Zone Descriptor Change Notices: Not Supported 00:18:37.230 Discovery Log Change Notices: Not Supported 00:18:37.230 Controller Attributes 00:18:37.230 128-bit Host Identifier: Supported 00:18:37.230 Non-Operational Permissive Mode: Not Supported 00:18:37.230 NVM Sets: Not Supported 00:18:37.230 Read Recovery Levels: Not Supported 00:18:37.230 Endurance Groups: Not Supported 00:18:37.230 Predictable Latency Mode: Not Supported 00:18:37.230 Traffic Based Keep ALive: Supported 00:18:37.230 Namespace Granularity: Not Supported 00:18:37.230 SQ Associations: Not Supported 00:18:37.230 UUID List: Not Supported 00:18:37.230 Multi-Domain Subsystem: Not Supported 00:18:37.230 Fixed Capacity Management: Not Supported 00:18:37.230 Variable Capacity Management: Not Supported 00:18:37.230 Delete Endurance Group: Not Supported 00:18:37.230 Delete NVM Set: Not Supported 00:18:37.230 Extended LBA Formats Supported: Not Supported 00:18:37.230 Flexible Data Placement Supported: Not Supported 00:18:37.230 00:18:37.230 Controller Memory Buffer Support 00:18:37.230 ================================ 00:18:37.230 Supported: No 00:18:37.230 00:18:37.230 Persistent Memory Region Support 00:18:37.230 ================================ 00:18:37.230 Supported: No 00:18:37.230 00:18:37.230 Admin Command Set Attributes 00:18:37.230 ============================ 00:18:37.230 Security Send/Receive: Not 
Supported 00:18:37.230 Format NVM: Not Supported 00:18:37.230 Firmware Activate/Download: Not Supported 00:18:37.230 Namespace Management: Not Supported 00:18:37.230 Device Self-Test: Not Supported 00:18:37.230 Directives: Not Supported 00:18:37.230 NVMe-MI: Not Supported 00:18:37.230 Virtualization Management: Not Supported 00:18:37.230 Doorbell Buffer Config: Not Supported 00:18:37.230 Get LBA Status Capability: Not Supported 00:18:37.230 Command & Feature Lockdown Capability: Not Supported 00:18:37.230 Abort Command Limit: 4 00:18:37.230 Async Event Request Limit: 4 00:18:37.230 Number of Firmware Slots: N/A 00:18:37.230 Firmware Slot 1 Read-Only: N/A 00:18:37.230 Firmware Activation Without Reset: N/A 00:18:37.230 Multiple Update Detection Support: N/A 00:18:37.230 Firmware Update Granularity: No Information Provided 00:18:37.230 Per-Namespace SMART Log: Yes 00:18:37.230 Asymmetric Namespace Access Log Page: Supported 00:18:37.230 ANA Transition Time : 10 sec 00:18:37.230 00:18:37.230 Asymmetric Namespace Access Capabilities 00:18:37.230 ANA Optimized State : Supported 00:18:37.230 ANA Non-Optimized State : Supported 00:18:37.230 ANA Inaccessible State : Supported 00:18:37.230 ANA Persistent Loss State : Supported 00:18:37.230 ANA Change State : Supported 00:18:37.230 ANAGRPID is not changed : No 00:18:37.230 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:37.230 00:18:37.230 ANA Group Identifier Maximum : 128 00:18:37.230 Number of ANA Group Identifiers : 128 00:18:37.230 Max Number of Allowed Namespaces : 1024 00:18:37.230 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:18:37.230 Command Effects Log Page: Supported 00:18:37.230 Get Log Page Extended Data: Supported 00:18:37.230 Telemetry Log Pages: Not Supported 00:18:37.230 Persistent Event Log Pages: Not Supported 00:18:37.230 Supported Log Pages Log Page: May Support 00:18:37.230 Commands Supported & Effects Log Page: Not Supported 00:18:37.230 Feature Identifiers & Effects Log Page:May Support 00:18:37.230 NVMe-MI Commands & Effects Log Page: May Support 00:18:37.230 Data Area 4 for Telemetry Log: Not Supported 00:18:37.230 Error Log Page Entries Supported: 128 00:18:37.230 Keep Alive: Supported 00:18:37.230 Keep Alive Granularity: 1000 ms 00:18:37.230 00:18:37.230 NVM Command Set Attributes 00:18:37.231 ========================== 00:18:37.231 Submission Queue Entry Size 00:18:37.231 Max: 64 00:18:37.231 Min: 64 00:18:37.231 Completion Queue Entry Size 00:18:37.231 Max: 16 00:18:37.231 Min: 16 00:18:37.231 Number of Namespaces: 1024 00:18:37.231 Compare Command: Not Supported 00:18:37.231 Write Uncorrectable Command: Not Supported 00:18:37.231 Dataset Management Command: Supported 00:18:37.231 Write Zeroes Command: Supported 00:18:37.231 Set Features Save Field: Not Supported 00:18:37.231 Reservations: Not Supported 00:18:37.231 Timestamp: Not Supported 00:18:37.231 Copy: Not Supported 00:18:37.231 Volatile Write Cache: Present 00:18:37.231 Atomic Write Unit (Normal): 1 00:18:37.231 Atomic Write Unit (PFail): 1 00:18:37.231 Atomic Compare & Write Unit: 1 00:18:37.231 Fused Compare & Write: Not Supported 00:18:37.231 Scatter-Gather List 00:18:37.231 SGL Command Set: Supported 00:18:37.231 SGL Keyed: Supported 00:18:37.231 SGL Bit Bucket Descriptor: Not Supported 00:18:37.231 SGL Metadata Pointer: Not Supported 00:18:37.231 Oversized SGL: Not Supported 00:18:37.231 SGL Metadata Address: Not Supported 00:18:37.231 SGL Offset: Supported 00:18:37.231 Transport SGL Data Block: Not Supported 00:18:37.231 Replay Protected Memory 
Block: Not Supported 00:18:37.231 00:18:37.231 Firmware Slot Information 00:18:37.231 ========================= 00:18:37.231 Active slot: 0 00:18:37.231 00:18:37.231 Asymmetric Namespace Access 00:18:37.231 =========================== 00:18:37.231 Change Count : 0 00:18:37.231 Number of ANA Group Descriptors : 1 00:18:37.231 ANA Group Descriptor : 0 00:18:37.231 ANA Group ID : 1 00:18:37.231 Number of NSID Values : 1 00:18:37.231 Change Count : 0 00:18:37.231 ANA State : 1 00:18:37.231 Namespace Identifier : 1 00:18:37.231 00:18:37.231 Commands Supported and Effects 00:18:37.231 ============================== 00:18:37.231 Admin Commands 00:18:37.231 -------------- 00:18:37.231 Get Log Page (02h): Supported 00:18:37.231 Identify (06h): Supported 00:18:37.231 Abort (08h): Supported 00:18:37.231 Set Features (09h): Supported 00:18:37.231 Get Features (0Ah): Supported 00:18:37.231 Asynchronous Event Request (0Ch): Supported 00:18:37.231 Keep Alive (18h): Supported 00:18:37.231 I/O Commands 00:18:37.231 ------------ 00:18:37.231 Flush (00h): Supported 00:18:37.231 Write (01h): Supported LBA-Change 00:18:37.231 Read (02h): Supported 00:18:37.231 Write Zeroes (08h): Supported LBA-Change 00:18:37.231 Dataset Management (09h): Supported 00:18:37.231 00:18:37.231 Error Log 00:18:37.231 ========= 00:18:37.231 Entry: 0 00:18:37.231 Error Count: 0x3 00:18:37.231 Submission Queue Id: 0x0 00:18:37.231 Command Id: 0x5 00:18:37.231 Phase Bit: 0 00:18:37.231 Status Code: 0x2 00:18:37.231 Status Code Type: 0x0 00:18:37.231 Do Not Retry: 1 00:18:37.231 Error Location: 0x28 00:18:37.231 LBA: 0x0 00:18:37.231 Namespace: 0x0 00:18:37.231 Vendor Log Page: 0x0 00:18:37.231 ----------- 00:18:37.231 Entry: 1 00:18:37.231 Error Count: 0x2 00:18:37.231 Submission Queue Id: 0x0 00:18:37.231 Command Id: 0x5 00:18:37.231 Phase Bit: 0 00:18:37.231 Status Code: 0x2 00:18:37.231 Status Code Type: 0x0 00:18:37.231 Do Not Retry: 1 00:18:37.231 Error Location: 0x28 00:18:37.231 LBA: 0x0 00:18:37.231 Namespace: 0x0 00:18:37.231 Vendor Log Page: 0x0 00:18:37.231 ----------- 00:18:37.231 Entry: 2 00:18:37.231 Error Count: 0x1 00:18:37.231 Submission Queue Id: 0x0 00:18:37.231 Command Id: 0x0 00:18:37.231 Phase Bit: 0 00:18:37.231 Status Code: 0x2 00:18:37.231 Status Code Type: 0x0 00:18:37.231 Do Not Retry: 1 00:18:37.231 Error Location: 0x28 00:18:37.231 LBA: 0x0 00:18:37.231 Namespace: 0x0 00:18:37.231 Vendor Log Page: 0x0 00:18:37.231 00:18:37.231 Number of Queues 00:18:37.231 ================ 00:18:37.231 Number of I/O Submission Queues: 128 00:18:37.231 Number of I/O Completion Queues: 128 00:18:37.231 00:18:37.231 ZNS Specific Controller Data 00:18:37.231 ============================ 00:18:37.231 Zone Append Size Limit: 0 00:18:37.231 00:18:37.231 00:18:37.231 Active Namespaces 00:18:37.231 ================= 00:18:37.231 get_feature(0x05) failed 00:18:37.231 Namespace ID:1 00:18:37.231 Command Set Identifier: NVM (00h) 00:18:37.231 Deallocate: Supported 00:18:37.231 Deallocated/Unwritten Error: Not Supported 00:18:37.231 Deallocated Read Value: Unknown 00:18:37.231 Deallocate in Write Zeroes: Not Supported 00:18:37.231 Deallocated Guard Field: 0xFFFF 00:18:37.231 Flush: Supported 00:18:37.231 Reservation: Not Supported 00:18:37.231 Namespace Sharing Capabilities: Multiple Controllers 00:18:37.231 Size (in LBAs): 1953525168 (931GiB) 00:18:37.231 Capacity (in LBAs): 1953525168 (931GiB) 00:18:37.231 Utilization (in LBAs): 1953525168 (931GiB) 00:18:37.231 UUID: 82376ca6-4f44-4a13-95ba-beb8a6056054 00:18:37.231 Thin 
Provisioning: Not Supported 00:18:37.231 Per-NS Atomic Units: Yes 00:18:37.231 Atomic Boundary Size (Normal): 0 00:18:37.231 Atomic Boundary Size (PFail): 0 00:18:37.231 Atomic Boundary Offset: 0 00:18:37.231 NGUID/EUI64 Never Reused: No 00:18:37.231 ANA group ID: 1 00:18:37.231 Namespace Write Protected: No 00:18:37.231 Number of LBA Formats: 1 00:18:37.231 Current LBA Format: LBA Format #00 00:18:37.231 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:37.231 00:18:37.231 17:35:03 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:37.231 17:35:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:37.231 17:35:03 -- nvmf/common.sh@117 -- # sync 00:18:37.231 17:35:03 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:37.231 17:35:03 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:37.231 17:35:03 -- nvmf/common.sh@120 -- # set +e 00:18:37.231 17:35:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:37.231 17:35:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:37.231 rmmod nvme_rdma 00:18:37.231 rmmod nvme_fabrics 00:18:37.231 17:35:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:37.231 17:35:03 -- nvmf/common.sh@124 -- # set -e 00:18:37.231 17:35:03 -- nvmf/common.sh@125 -- # return 0 00:18:37.231 17:35:03 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:37.231 17:35:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:37.231 17:35:03 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:18:37.231 17:35:03 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:37.231 17:35:03 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:37.231 17:35:03 -- nvmf/common.sh@675 -- # echo 0 00:18:37.231 17:35:03 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:37.231 17:35:03 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:37.231 17:35:03 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:37.231 17:35:03 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:37.231 17:35:03 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:18:37.231 17:35:03 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:18:37.231 17:35:03 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:18:38.608 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:18:38.608 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:18:38.608 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:18:38.608 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:18:38.608 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:18:38.608 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:18:38.608 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:18:38.608 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:18:38.608 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:18:38.608 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:18:38.608 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:18:38.608 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:18:38.608 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:18:38.608 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:18:38.608 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:18:38.608 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:18:39.544 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:18:39.544 00:18:39.544 real 0m7.008s 00:18:39.544 user 0m1.976s 00:18:39.544 sys 0m3.101s 00:18:39.544 
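For reference, the discovery and identify probes traced above reduce to the minimal sketch below; the addresses, NQNs and host UUID are the values from this run, and the spdk_nvme_identify path assumes the workspace layout used by this job.

  # Query the kernel RDMA discovery service for the subsystems it exports.
  nvme discover -t rdma -a 192.168.100.8 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

  # Dump the controller and namespace data of one discovered subsystem with SPDK's identify tool.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The discovery log is expected to list both the well-known discovery subsystem and nqn.2016-06.io.spdk:testnqn, exactly as it does in the output above.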
17:35:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:39.544 17:35:06 -- common/autotest_common.sh@10 -- # set +x 00:18:39.544 ************************************ 00:18:39.544 END TEST nvmf_identify_kernel_target 00:18:39.544 ************************************ 00:18:39.544 17:35:06 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:18:39.544 17:35:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:39.544 17:35:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.544 17:35:06 -- common/autotest_common.sh@10 -- # set +x 00:18:39.833 ************************************ 00:18:39.833 START TEST nvmf_auth 00:18:39.833 ************************************ 00:18:39.833 17:35:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:18:39.833 * Looking for test storage... 00:18:39.833 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:39.833 17:35:06 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.833 17:35:06 -- nvmf/common.sh@7 -- # uname -s 00:18:39.833 17:35:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.833 17:35:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.833 17:35:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.833 17:35:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.833 17:35:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.833 17:35:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.833 17:35:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.833 17:35:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.833 17:35:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.833 17:35:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.833 17:35:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.833 17:35:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:39.833 17:35:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.833 17:35:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.834 17:35:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.834 17:35:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.834 17:35:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:39.834 17:35:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.834 17:35:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.834 17:35:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.834 17:35:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.834 17:35:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.834 17:35:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.834 17:35:06 -- paths/export.sh@5 -- # export PATH 00:18:39.834 17:35:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.834 17:35:06 -- nvmf/common.sh@47 -- # : 0 00:18:39.834 17:35:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:39.834 17:35:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:39.834 17:35:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.834 17:35:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.834 17:35:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.834 17:35:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:39.834 17:35:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:39.834 17:35:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:39.834 17:35:06 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:39.834 17:35:06 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:39.834 17:35:06 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:18:39.834 17:35:06 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:39.834 17:35:06 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:39.834 17:35:06 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:39.834 17:35:06 -- host/auth.sh@21 -- # keys=() 00:18:39.834 17:35:06 -- host/auth.sh@77 -- # nvmftestinit 00:18:39.834 17:35:06 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:39.834 17:35:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.834 17:35:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:39.834 17:35:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:39.834 17:35:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:39.834 17:35:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.834 17:35:06 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.834 17:35:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.834 17:35:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:39.834 17:35:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:39.834 17:35:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:39.834 17:35:06 -- common/autotest_common.sh@10 -- # set +x 00:18:41.745 17:35:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:41.745 17:35:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:41.745 17:35:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:41.745 17:35:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:41.745 17:35:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:41.745 17:35:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:41.745 17:35:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:41.745 17:35:07 -- nvmf/common.sh@295 -- # net_devs=() 00:18:41.745 17:35:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:41.745 17:35:07 -- nvmf/common.sh@296 -- # e810=() 00:18:41.745 17:35:07 -- nvmf/common.sh@296 -- # local -ga e810 00:18:41.745 17:35:07 -- nvmf/common.sh@297 -- # x722=() 00:18:41.745 17:35:07 -- nvmf/common.sh@297 -- # local -ga x722 00:18:41.745 17:35:07 -- nvmf/common.sh@298 -- # mlx=() 00:18:41.745 17:35:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:41.745 17:35:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:41.745 17:35:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:41.745 17:35:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:41.745 17:35:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:41.745 17:35:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:41.745 17:35:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:41.745 17:35:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:41.745 17:35:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:41.745 17:35:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:41.745 17:35:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:41.745 17:35:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:41.745 17:35:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:41.745 17:35:08 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:41.745 17:35:08 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:41.745 17:35:08 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:41.745 17:35:08 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:41.745 17:35:08 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:41.745 17:35:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:41.745 17:35:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:41.745 17:35:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:18:41.745 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:18:41.745 17:35:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:41.745 17:35:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:41.745 17:35:08 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:18:41.745 17:35:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:41.745 17:35:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:41.745 17:35:08 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:18:41.745 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:18:41.745 17:35:08 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:41.745 17:35:08 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:41.745 17:35:08 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:18:41.745 17:35:08 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:41.745 17:35:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:41.745 17:35:08 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:41.745 17:35:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:41.745 17:35:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.745 17:35:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:41.745 17:35:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.745 17:35:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:18:41.745 Found net devices under 0000:09:00.0: mlx_0_0 00:18:41.745 17:35:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.745 17:35:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:41.745 17:35:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.745 17:35:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:41.746 17:35:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.746 17:35:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:18:41.746 Found net devices under 0000:09:00.1: mlx_0_1 00:18:41.746 17:35:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.746 17:35:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:41.746 17:35:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:41.746 17:35:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:41.746 17:35:08 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:41.746 17:35:08 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:41.746 17:35:08 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:41.746 17:35:08 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:41.746 17:35:08 -- nvmf/common.sh@58 -- # uname 00:18:41.746 17:35:08 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:41.746 17:35:08 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:41.746 17:35:08 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:41.746 17:35:08 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:41.746 17:35:08 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:41.746 17:35:08 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:41.746 17:35:08 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:41.746 17:35:08 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:41.746 17:35:08 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:41.746 17:35:08 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:41.746 17:35:08 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:41.746 17:35:08 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:41.746 17:35:08 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:41.746 17:35:08 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:41.746 17:35:08 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:41.746 17:35:08 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:41.746 17:35:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:41.746 17:35:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:41.746 17:35:08 -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:41.746 17:35:08 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:41.746 17:35:08 -- nvmf/common.sh@105 -- # continue 2 00:18:41.746 17:35:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:41.746 17:35:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:41.746 17:35:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:41.746 17:35:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:41.746 17:35:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:41.746 17:35:08 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:41.746 17:35:08 -- nvmf/common.sh@105 -- # continue 2 00:18:41.746 17:35:08 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:41.746 17:35:08 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:41.746 17:35:08 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:41.746 17:35:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:41.746 17:35:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:41.746 17:35:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:41.746 17:35:08 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:41.746 17:35:08 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:41.746 17:35:08 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:41.746 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:41.746 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:18:41.746 altname enp9s0f0np0 00:18:41.746 inet 192.168.100.8/24 scope global mlx_0_0 00:18:41.746 valid_lft forever preferred_lft forever 00:18:41.746 17:35:08 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:41.746 17:35:08 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:41.746 17:35:08 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:41.746 17:35:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:41.746 17:35:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:41.746 17:35:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:41.746 17:35:08 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:41.746 17:35:08 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:41.746 17:35:08 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:41.746 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:41.746 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:18:41.746 altname enp9s0f1np1 00:18:41.746 inet 192.168.100.9/24 scope global mlx_0_1 00:18:41.746 valid_lft forever preferred_lft forever 00:18:41.746 17:35:08 -- nvmf/common.sh@411 -- # return 0 00:18:41.746 17:35:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:41.746 17:35:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:41.746 17:35:08 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:18:41.746 17:35:08 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:41.746 17:35:08 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:41.746 17:35:08 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:41.746 17:35:08 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:41.746 17:35:08 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:41.746 17:35:08 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:41.746 17:35:08 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:41.746 17:35:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:41.746 17:35:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:18:41.746 17:35:08 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:41.746 17:35:08 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:41.746 17:35:08 -- nvmf/common.sh@105 -- # continue 2 00:18:41.746 17:35:08 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:41.746 17:35:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:41.746 17:35:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:41.746 17:35:08 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:41.746 17:35:08 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:41.746 17:35:08 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:41.746 17:35:08 -- nvmf/common.sh@105 -- # continue 2 00:18:41.746 17:35:08 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:41.746 17:35:08 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:41.746 17:35:08 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:41.746 17:35:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:41.746 17:35:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:41.746 17:35:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:41.746 17:35:08 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:41.746 17:35:08 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:41.746 17:35:08 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:41.746 17:35:08 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:41.746 17:35:08 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:41.746 17:35:08 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:41.746 17:35:08 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:41.746 192.168.100.9' 00:18:41.746 17:35:08 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:41.746 192.168.100.9' 00:18:41.746 17:35:08 -- nvmf/common.sh@446 -- # head -n 1 00:18:41.746 17:35:08 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:41.746 17:35:08 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:41.746 192.168.100.9' 00:18:41.746 17:35:08 -- nvmf/common.sh@447 -- # tail -n +2 00:18:41.746 17:35:08 -- nvmf/common.sh@447 -- # head -n 1 00:18:41.746 17:35:08 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:41.746 17:35:08 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:41.746 17:35:08 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:41.746 17:35:08 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:41.746 17:35:08 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:41.746 17:35:08 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:41.746 17:35:08 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:18:41.746 17:35:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:41.746 17:35:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:41.746 17:35:08 -- common/autotest_common.sh@10 -- # set +x 00:18:41.746 17:35:08 -- nvmf/common.sh@470 -- # nvmfpid=2060867 00:18:41.746 17:35:08 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:41.746 17:35:08 -- nvmf/common.sh@471 -- # waitforlisten 2060867 00:18:41.746 17:35:08 -- common/autotest_common.sh@817 -- # '[' -z 2060867 ']' 00:18:41.746 17:35:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.746 17:35:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:41.746 17:35:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.746 17:35:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:41.746 17:35:08 -- common/autotest_common.sh@10 -- # set +x 00:18:42.005 17:35:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:42.005 17:35:08 -- common/autotest_common.sh@850 -- # return 0 00:18:42.005 17:35:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:42.005 17:35:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:42.005 17:35:08 -- common/autotest_common.sh@10 -- # set +x 00:18:42.005 17:35:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.005 17:35:08 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:42.005 17:35:08 -- host/auth.sh@81 -- # gen_key null 32 00:18:42.005 17:35:08 -- host/auth.sh@53 -- # local digest len file key 00:18:42.005 17:35:08 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:42.005 17:35:08 -- host/auth.sh@54 -- # local -A digests 00:18:42.005 17:35:08 -- host/auth.sh@56 -- # digest=null 00:18:42.005 17:35:08 -- host/auth.sh@56 -- # len=32 00:18:42.005 17:35:08 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:42.005 17:35:08 -- host/auth.sh@57 -- # key=0e1c1b1eb897f9b82e19c76e8570e847 00:18:42.005 17:35:08 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:18:42.005 17:35:08 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.xPx 00:18:42.005 17:35:08 -- host/auth.sh@59 -- # format_dhchap_key 0e1c1b1eb897f9b82e19c76e8570e847 0 00:18:42.005 17:35:08 -- nvmf/common.sh@708 -- # format_key DHHC-1 0e1c1b1eb897f9b82e19c76e8570e847 0 00:18:42.005 17:35:08 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:42.005 17:35:08 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:42.005 17:35:08 -- nvmf/common.sh@693 -- # key=0e1c1b1eb897f9b82e19c76e8570e847 00:18:42.005 17:35:08 -- nvmf/common.sh@693 -- # digest=0 00:18:42.005 17:35:08 -- nvmf/common.sh@694 -- # python - 00:18:42.264 17:35:08 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.xPx 00:18:42.264 17:35:08 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.xPx 00:18:42.264 17:35:08 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.xPx 00:18:42.264 17:35:08 -- host/auth.sh@82 -- # gen_key null 48 00:18:42.264 17:35:08 -- host/auth.sh@53 -- # local digest len file key 00:18:42.264 17:35:08 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:42.264 17:35:08 -- host/auth.sh@54 -- # local -A digests 00:18:42.264 17:35:08 -- host/auth.sh@56 -- # digest=null 00:18:42.264 17:35:08 -- host/auth.sh@56 -- # len=48 00:18:42.264 17:35:08 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:42.264 17:35:08 -- host/auth.sh@57 -- # key=cfbdb09c88ab16cf6436cf906add3d6d28662fe7c0603127 00:18:42.264 17:35:08 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:18:42.264 17:35:08 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.HAK 00:18:42.264 17:35:08 -- host/auth.sh@59 -- # format_dhchap_key cfbdb09c88ab16cf6436cf906add3d6d28662fe7c0603127 0 00:18:42.264 17:35:08 -- nvmf/common.sh@708 -- # format_key DHHC-1 cfbdb09c88ab16cf6436cf906add3d6d28662fe7c0603127 0 00:18:42.264 17:35:08 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:42.264 17:35:08 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:42.264 17:35:08 -- nvmf/common.sh@693 -- # key=cfbdb09c88ab16cf6436cf906add3d6d28662fe7c0603127 
00:18:42.264 17:35:08 -- nvmf/common.sh@693 -- # digest=0 00:18:42.264 17:35:08 -- nvmf/common.sh@694 -- # python - 00:18:42.264 17:35:08 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.HAK 00:18:42.264 17:35:08 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.HAK 00:18:42.264 17:35:08 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.HAK 00:18:42.264 17:35:08 -- host/auth.sh@83 -- # gen_key sha256 32 00:18:42.264 17:35:08 -- host/auth.sh@53 -- # local digest len file key 00:18:42.264 17:35:08 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:42.264 17:35:08 -- host/auth.sh@54 -- # local -A digests 00:18:42.264 17:35:08 -- host/auth.sh@56 -- # digest=sha256 00:18:42.264 17:35:08 -- host/auth.sh@56 -- # len=32 00:18:42.264 17:35:08 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:42.264 17:35:08 -- host/auth.sh@57 -- # key=2c6f9de014ef7942e17564bdbeba03b8 00:18:42.264 17:35:08 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:18:42.264 17:35:08 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.JKm 00:18:42.264 17:35:08 -- host/auth.sh@59 -- # format_dhchap_key 2c6f9de014ef7942e17564bdbeba03b8 1 00:18:42.264 17:35:08 -- nvmf/common.sh@708 -- # format_key DHHC-1 2c6f9de014ef7942e17564bdbeba03b8 1 00:18:42.264 17:35:08 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:42.264 17:35:08 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:42.264 17:35:08 -- nvmf/common.sh@693 -- # key=2c6f9de014ef7942e17564bdbeba03b8 00:18:42.264 17:35:08 -- nvmf/common.sh@693 -- # digest=1 00:18:42.264 17:35:08 -- nvmf/common.sh@694 -- # python - 00:18:42.264 17:35:08 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.JKm 00:18:42.264 17:35:08 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.JKm 00:18:42.264 17:35:08 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.JKm 00:18:42.264 17:35:08 -- host/auth.sh@84 -- # gen_key sha384 48 00:18:42.264 17:35:08 -- host/auth.sh@53 -- # local digest len file key 00:18:42.264 17:35:08 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:42.264 17:35:08 -- host/auth.sh@54 -- # local -A digests 00:18:42.264 17:35:08 -- host/auth.sh@56 -- # digest=sha384 00:18:42.264 17:35:08 -- host/auth.sh@56 -- # len=48 00:18:42.264 17:35:08 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:42.264 17:35:08 -- host/auth.sh@57 -- # key=41665c5aeec56c2f321de6cb3e773a2202c6e59a1025cbdf 00:18:42.264 17:35:08 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:18:42.264 17:35:08 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.qoH 00:18:42.264 17:35:08 -- host/auth.sh@59 -- # format_dhchap_key 41665c5aeec56c2f321de6cb3e773a2202c6e59a1025cbdf 2 00:18:42.264 17:35:08 -- nvmf/common.sh@708 -- # format_key DHHC-1 41665c5aeec56c2f321de6cb3e773a2202c6e59a1025cbdf 2 00:18:42.264 17:35:08 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:42.264 17:35:08 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:42.264 17:35:08 -- nvmf/common.sh@693 -- # key=41665c5aeec56c2f321de6cb3e773a2202c6e59a1025cbdf 00:18:42.264 17:35:08 -- nvmf/common.sh@693 -- # digest=2 00:18:42.264 17:35:08 -- nvmf/common.sh@694 -- # python - 00:18:42.264 17:35:08 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.qoH 00:18:42.264 17:35:08 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.qoH 00:18:42.264 17:35:08 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.qoH 00:18:42.264 17:35:08 -- host/auth.sh@85 -- # gen_key sha512 64 00:18:42.264 17:35:08 -- host/auth.sh@53 -- # local 
digest len file key 00:18:42.264 17:35:08 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:42.264 17:35:08 -- host/auth.sh@54 -- # local -A digests 00:18:42.264 17:35:08 -- host/auth.sh@56 -- # digest=sha512 00:18:42.264 17:35:08 -- host/auth.sh@56 -- # len=64 00:18:42.264 17:35:08 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:42.264 17:35:08 -- host/auth.sh@57 -- # key=4eebb10db9de773cbb4aa5fee935a9dfb16a99c88316efc7753f7d22b8d0b87c 00:18:42.264 17:35:08 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:18:42.264 17:35:08 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.dOc 00:18:42.264 17:35:08 -- host/auth.sh@59 -- # format_dhchap_key 4eebb10db9de773cbb4aa5fee935a9dfb16a99c88316efc7753f7d22b8d0b87c 3 00:18:42.264 17:35:08 -- nvmf/common.sh@708 -- # format_key DHHC-1 4eebb10db9de773cbb4aa5fee935a9dfb16a99c88316efc7753f7d22b8d0b87c 3 00:18:42.264 17:35:08 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:42.264 17:35:08 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:42.264 17:35:08 -- nvmf/common.sh@693 -- # key=4eebb10db9de773cbb4aa5fee935a9dfb16a99c88316efc7753f7d22b8d0b87c 00:18:42.264 17:35:08 -- nvmf/common.sh@693 -- # digest=3 00:18:42.264 17:35:08 -- nvmf/common.sh@694 -- # python - 00:18:42.264 17:35:08 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.dOc 00:18:42.264 17:35:08 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.dOc 00:18:42.264 17:35:08 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.dOc 00:18:42.264 17:35:08 -- host/auth.sh@87 -- # waitforlisten 2060867 00:18:42.264 17:35:08 -- common/autotest_common.sh@817 -- # '[' -z 2060867 ']' 00:18:42.264 17:35:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.264 17:35:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:42.264 17:35:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
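The gen_key calls traced above draw each DHCHAP secret from /dev/urandom and wrap it into a DHHC-1:<digest-id>:<encoded-secret>: string via format_dhchap_key from nvmf/common.sh. A rough sketch of one iteration (gen_key null 32) follows; it assumes nvmf/common.sh has been sourced and that rpc_cmd in the trace resolves to scripts/rpc.py against the default RPC socket.

  key_hex=$(xxd -p -c0 -l 16 /dev/urandom)        # 32 hex characters of raw secret
  key_file=$(mktemp -t spdk.key-null.XXX)
  format_dhchap_key "$key_hex" 0 > "$key_file"    # emits DHHC-1:00:<encoded-secret>:
  chmod 0600 "$key_file"

  # Each key file is then handed to the running nvmf_tgt, as the trace below does for key0..key4:
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 "$key_file"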
00:18:42.264 17:35:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:42.264 17:35:08 -- common/autotest_common.sh@10 -- # set +x 00:18:42.524 17:35:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:42.524 17:35:09 -- common/autotest_common.sh@850 -- # return 0 00:18:42.524 17:35:09 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:42.524 17:35:09 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xPx 00:18:42.524 17:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.524 17:35:09 -- common/autotest_common.sh@10 -- # set +x 00:18:42.524 17:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.524 17:35:09 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:42.524 17:35:09 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.HAK 00:18:42.524 17:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.524 17:35:09 -- common/autotest_common.sh@10 -- # set +x 00:18:42.524 17:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.524 17:35:09 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:42.524 17:35:09 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.JKm 00:18:42.524 17:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.524 17:35:09 -- common/autotest_common.sh@10 -- # set +x 00:18:42.782 17:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.782 17:35:09 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:42.782 17:35:09 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.qoH 00:18:42.782 17:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.782 17:35:09 -- common/autotest_common.sh@10 -- # set +x 00:18:42.782 17:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.782 17:35:09 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:42.782 17:35:09 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.dOc 00:18:42.782 17:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:42.782 17:35:09 -- common/autotest_common.sh@10 -- # set +x 00:18:42.782 17:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:42.782 17:35:09 -- host/auth.sh@92 -- # nvmet_auth_init 00:18:42.782 17:35:09 -- host/auth.sh@35 -- # get_main_ns_ip 00:18:42.782 17:35:09 -- nvmf/common.sh@717 -- # local ip 00:18:42.782 17:35:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:42.782 17:35:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:42.782 17:35:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.782 17:35:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.782 17:35:09 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:42.782 17:35:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:42.782 17:35:09 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:42.782 17:35:09 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:42.782 17:35:09 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:42.782 17:35:09 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:18:42.782 17:35:09 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:18:42.782 17:35:09 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:18:42.782 17:35:09 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:42.782 17:35:09 -- 
nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:42.782 17:35:09 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:42.782 17:35:09 -- nvmf/common.sh@628 -- # local block nvme 00:18:42.782 17:35:09 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:18:42.782 17:35:09 -- nvmf/common.sh@631 -- # modprobe nvmet 00:18:42.782 17:35:09 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:42.782 17:35:09 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:18:43.718 Waiting for block devices as requested 00:18:43.718 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:18:43.718 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:43.718 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:43.976 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:43.976 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:43.976 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:43.976 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:44.235 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:44.235 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:44.235 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:18:44.235 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:18:44.495 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:18:44.495 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:18:44.495 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:18:44.753 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:18:44.753 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:18:44.753 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:18:45.322 17:35:11 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:18:45.322 17:35:11 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:45.322 17:35:11 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:18:45.322 17:35:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:45.322 17:35:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:45.322 17:35:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:45.322 17:35:11 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:18:45.322 17:35:11 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:45.322 17:35:11 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:18:45.322 No valid GPT data, bailing 00:18:45.322 17:35:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:45.322 17:35:11 -- scripts/common.sh@391 -- # pt= 00:18:45.322 17:35:11 -- scripts/common.sh@392 -- # return 1 00:18:45.322 17:35:11 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:18:45.322 17:35:11 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:18:45.322 17:35:11 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:45.322 17:35:11 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:45.322 17:35:11 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:45.322 17:35:11 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:45.322 17:35:11 -- nvmf/common.sh@656 -- # echo 1 00:18:45.322 17:35:11 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:18:45.322 17:35:11 -- nvmf/common.sh@658 -- # echo 1 00:18:45.322 17:35:11 -- nvmf/common.sh@660 -- # echo 192.168.100.8 00:18:45.322 17:35:11 -- nvmf/common.sh@661 -- # echo rdma 
00:18:45.322 17:35:11 -- nvmf/common.sh@662 -- # echo 4420 00:18:45.322 17:35:11 -- nvmf/common.sh@663 -- # echo ipv4 00:18:45.322 17:35:11 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:45.322 17:35:11 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 192.168.100.8 -t rdma -s 4420 00:18:45.322 00:18:45.322 Discovery Log Number of Records 2, Generation counter 2 00:18:45.322 =====Discovery Log Entry 0====== 00:18:45.322 trtype: rdma 00:18:45.322 adrfam: ipv4 00:18:45.322 subtype: current discovery subsystem 00:18:45.322 treq: not specified, sq flow control disable supported 00:18:45.322 portid: 1 00:18:45.322 trsvcid: 4420 00:18:45.322 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:45.322 traddr: 192.168.100.8 00:18:45.322 eflags: none 00:18:45.322 rdma_prtype: not specified 00:18:45.322 rdma_qptype: connected 00:18:45.322 rdma_cms: rdma-cm 00:18:45.322 rdma_pkey: 0x0000 00:18:45.322 =====Discovery Log Entry 1====== 00:18:45.322 trtype: rdma 00:18:45.322 adrfam: ipv4 00:18:45.322 subtype: nvme subsystem 00:18:45.322 treq: not specified, sq flow control disable supported 00:18:45.322 portid: 1 00:18:45.322 trsvcid: 4420 00:18:45.322 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:45.322 traddr: 192.168.100.8 00:18:45.322 eflags: none 00:18:45.322 rdma_prtype: not specified 00:18:45.322 rdma_qptype: connected 00:18:45.322 rdma_cms: rdma-cm 00:18:45.322 rdma_pkey: 0x0000 00:18:45.322 17:35:11 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:45.322 17:35:11 -- host/auth.sh@37 -- # echo 0 00:18:45.322 17:35:11 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:45.322 17:35:11 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:45.322 17:35:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:45.322 17:35:11 -- host/auth.sh@44 -- # digest=sha256 00:18:45.322 17:35:11 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:45.322 17:35:11 -- host/auth.sh@44 -- # keyid=1 00:18:45.322 17:35:11 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:45.322 17:35:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:45.322 17:35:11 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:45.322 17:35:11 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:45.322 17:35:11 -- host/auth.sh@100 -- # IFS=, 00:18:45.322 17:35:11 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:18:45.323 17:35:11 -- host/auth.sh@100 -- # IFS=, 00:18:45.323 17:35:11 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:45.323 17:35:11 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:45.323 17:35:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:45.323 17:35:11 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:18:45.323 17:35:11 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:45.323 17:35:11 -- host/auth.sh@68 -- # keyid=1 00:18:45.323 17:35:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:45.323 17:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.323 17:35:11 -- common/autotest_common.sh@10 -- # set +x 00:18:45.323 17:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.323 17:35:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:45.323 17:35:11 -- nvmf/common.sh@717 -- # local ip 00:18:45.323 17:35:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:45.323 17:35:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:45.323 17:35:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.323 17:35:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.323 17:35:11 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:45.323 17:35:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:45.323 17:35:11 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:45.323 17:35:11 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:45.323 17:35:11 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:45.323 17:35:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:45.323 17:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.323 17:35:11 -- common/autotest_common.sh@10 -- # set +x 00:18:45.583 nvme0n1 00:18:45.583 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.583 17:35:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.583 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.583 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:45.583 17:35:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:45.583 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.583 17:35:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.583 17:35:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.583 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.583 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:45.583 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.583 17:35:12 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:18:45.583 17:35:12 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.583 17:35:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:45.583 17:35:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:45.583 17:35:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:45.583 17:35:12 -- host/auth.sh@44 -- # digest=sha256 00:18:45.583 17:35:12 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:45.583 17:35:12 -- host/auth.sh@44 -- # keyid=0 00:18:45.583 17:35:12 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:45.583 17:35:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:45.583 17:35:12 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:45.583 17:35:12 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:45.583 17:35:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:18:45.583 17:35:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:45.583 17:35:12 -- host/auth.sh@68 -- # digest=sha256 00:18:45.583 17:35:12 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:45.583 17:35:12 -- host/auth.sh@68 -- # keyid=0 00:18:45.583 17:35:12 -- 
host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:45.583 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.583 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:45.583 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.583 17:35:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:45.583 17:35:12 -- nvmf/common.sh@717 -- # local ip 00:18:45.583 17:35:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:45.583 17:35:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:45.583 17:35:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.583 17:35:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.583 17:35:12 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:45.583 17:35:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:45.583 17:35:12 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:45.583 17:35:12 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:45.583 17:35:12 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:45.583 17:35:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:45.583 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.583 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:45.843 nvme0n1 00:18:45.843 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.843 17:35:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.843 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.843 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:45.843 17:35:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:45.843 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.843 17:35:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.843 17:35:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.843 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.843 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:45.843 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.843 17:35:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:45.843 17:35:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:45.843 17:35:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:45.843 17:35:12 -- host/auth.sh@44 -- # digest=sha256 00:18:45.843 17:35:12 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:45.843 17:35:12 -- host/auth.sh@44 -- # keyid=1 00:18:45.843 17:35:12 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:45.843 17:35:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:45.843 17:35:12 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:45.843 17:35:12 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:45.843 17:35:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:18:45.843 17:35:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:45.843 17:35:12 -- host/auth.sh@68 -- # digest=sha256 00:18:45.843 17:35:12 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:45.843 17:35:12 -- host/auth.sh@68 -- # keyid=1 00:18:45.843 17:35:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe2048 00:18:45.843 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.843 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:45.843 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.843 17:35:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:45.843 17:35:12 -- nvmf/common.sh@717 -- # local ip 00:18:45.843 17:35:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:45.843 17:35:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:45.843 17:35:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.843 17:35:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.843 17:35:12 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:45.843 17:35:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:45.843 17:35:12 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:45.843 17:35:12 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:45.843 17:35:12 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:45.843 17:35:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:45.843 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.843 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:46.102 nvme0n1 00:18:46.102 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.102 17:35:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.102 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.102 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:46.102 17:35:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:46.102 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.102 17:35:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.102 17:35:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.102 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.102 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:46.102 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.102 17:35:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:46.102 17:35:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:46.102 17:35:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:46.102 17:35:12 -- host/auth.sh@44 -- # digest=sha256 00:18:46.102 17:35:12 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:46.102 17:35:12 -- host/auth.sh@44 -- # keyid=2 00:18:46.102 17:35:12 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:18:46.102 17:35:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:46.102 17:35:12 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:46.102 17:35:12 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:18:46.102 17:35:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:18:46.102 17:35:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:46.102 17:35:12 -- host/auth.sh@68 -- # digest=sha256 00:18:46.102 17:35:12 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:46.102 17:35:12 -- host/auth.sh@68 -- # keyid=2 00:18:46.102 17:35:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.102 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.102 17:35:12 -- common/autotest_common.sh@10 -- # 
set +x 00:18:46.102 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.102 17:35:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:46.102 17:35:12 -- nvmf/common.sh@717 -- # local ip 00:18:46.102 17:35:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:46.102 17:35:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:46.102 17:35:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.102 17:35:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.102 17:35:12 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:46.102 17:35:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:46.102 17:35:12 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:46.102 17:35:12 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:46.102 17:35:12 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:46.102 17:35:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:46.102 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.102 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:46.360 nvme0n1 00:18:46.360 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.360 17:35:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.360 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.360 17:35:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:46.360 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:46.360 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.360 17:35:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.360 17:35:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.360 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.360 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:46.360 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.360 17:35:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:46.360 17:35:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:46.360 17:35:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:46.360 17:35:12 -- host/auth.sh@44 -- # digest=sha256 00:18:46.360 17:35:12 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:46.360 17:35:12 -- host/auth.sh@44 -- # keyid=3 00:18:46.360 17:35:12 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:18:46.360 17:35:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:46.360 17:35:12 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:46.360 17:35:12 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:18:46.360 17:35:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:18:46.360 17:35:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:46.360 17:35:12 -- host/auth.sh@68 -- # digest=sha256 00:18:46.360 17:35:12 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:46.360 17:35:12 -- host/auth.sh@68 -- # keyid=3 00:18:46.360 17:35:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.360 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.360 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:46.360 17:35:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.360 
17:35:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:46.360 17:35:12 -- nvmf/common.sh@717 -- # local ip 00:18:46.360 17:35:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:46.360 17:35:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:46.360 17:35:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.360 17:35:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.360 17:35:12 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:46.360 17:35:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:46.360 17:35:12 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:46.360 17:35:12 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:46.360 17:35:12 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:46.360 17:35:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:46.360 17:35:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.360 17:35:12 -- common/autotest_common.sh@10 -- # set +x 00:18:46.618 nvme0n1 00:18:46.618 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.618 17:35:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.618 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.618 17:35:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:46.618 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:46.618 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.618 17:35:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.618 17:35:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.618 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.618 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:46.618 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.618 17:35:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:46.618 17:35:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:46.618 17:35:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:46.618 17:35:13 -- host/auth.sh@44 -- # digest=sha256 00:18:46.618 17:35:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:46.618 17:35:13 -- host/auth.sh@44 -- # keyid=4 00:18:46.618 17:35:13 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:18:46.618 17:35:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:46.618 17:35:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:46.618 17:35:13 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:18:46.618 17:35:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:18:46.618 17:35:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:46.618 17:35:13 -- host/auth.sh@68 -- # digest=sha256 00:18:46.618 17:35:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:46.618 17:35:13 -- host/auth.sh@68 -- # keyid=4 00:18:46.618 17:35:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:46.618 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.618 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:46.618 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.618 17:35:13 -- host/auth.sh@70 -- # get_main_ns_ip 
00:18:46.618 17:35:13 -- nvmf/common.sh@717 -- # local ip 00:18:46.618 17:35:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:46.618 17:35:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:46.618 17:35:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.618 17:35:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.618 17:35:13 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:46.618 17:35:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:46.618 17:35:13 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:46.618 17:35:13 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:46.618 17:35:13 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:46.618 17:35:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:46.618 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.618 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:46.877 nvme0n1 00:18:46.877 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.877 17:35:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.877 17:35:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:46.877 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.877 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:46.877 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.877 17:35:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.877 17:35:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.877 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.877 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:46.877 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.877 17:35:13 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.877 17:35:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:46.877 17:35:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:46.877 17:35:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:46.877 17:35:13 -- host/auth.sh@44 -- # digest=sha256 00:18:46.877 17:35:13 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:46.877 17:35:13 -- host/auth.sh@44 -- # keyid=0 00:18:46.877 17:35:13 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:46.877 17:35:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:46.877 17:35:13 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:46.877 17:35:13 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:46.877 17:35:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:18:46.877 17:35:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:46.877 17:35:13 -- host/auth.sh@68 -- # digest=sha256 00:18:46.877 17:35:13 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:46.877 17:35:13 -- host/auth.sh@68 -- # keyid=0 00:18:46.877 17:35:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:46.877 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.877 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:46.877 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.877 17:35:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:46.877 17:35:13 -- nvmf/common.sh@717 -- # local ip 
00:18:46.877 17:35:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:46.877 17:35:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:46.877 17:35:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.877 17:35:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.877 17:35:13 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:46.877 17:35:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:46.877 17:35:13 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:46.877 17:35:13 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:46.877 17:35:13 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:46.877 17:35:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:46.877 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.877 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:47.135 nvme0n1 00:18:47.135 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.135 17:35:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.135 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.135 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:47.135 17:35:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:47.135 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.135 17:35:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.135 17:35:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.135 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.135 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:47.135 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.135 17:35:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:47.135 17:35:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:47.135 17:35:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:47.135 17:35:13 -- host/auth.sh@44 -- # digest=sha256 00:18:47.135 17:35:13 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:47.135 17:35:13 -- host/auth.sh@44 -- # keyid=1 00:18:47.135 17:35:13 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:47.135 17:35:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:47.135 17:35:13 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:47.135 17:35:13 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:47.135 17:35:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:18:47.135 17:35:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:47.135 17:35:13 -- host/auth.sh@68 -- # digest=sha256 00:18:47.135 17:35:13 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:47.135 17:35:13 -- host/auth.sh@68 -- # keyid=1 00:18:47.135 17:35:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.135 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.135 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:47.135 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.135 17:35:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:47.135 17:35:13 -- nvmf/common.sh@717 -- # local ip 00:18:47.135 17:35:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:47.135 17:35:13 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:18:47.135 17:35:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.135 17:35:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.135 17:35:13 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:47.135 17:35:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:47.135 17:35:13 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:47.135 17:35:13 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:47.135 17:35:13 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:47.135 17:35:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:47.135 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.135 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:47.392 nvme0n1 00:18:47.392 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.392 17:35:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.392 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.392 17:35:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:47.392 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:47.392 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.392 17:35:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.392 17:35:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.392 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.392 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:47.392 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.392 17:35:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:47.392 17:35:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:47.392 17:35:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:47.392 17:35:13 -- host/auth.sh@44 -- # digest=sha256 00:18:47.392 17:35:13 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:47.392 17:35:13 -- host/auth.sh@44 -- # keyid=2 00:18:47.392 17:35:13 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:18:47.392 17:35:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:47.392 17:35:13 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:47.392 17:35:13 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:18:47.392 17:35:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:18:47.392 17:35:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:47.392 17:35:13 -- host/auth.sh@68 -- # digest=sha256 00:18:47.392 17:35:13 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:47.392 17:35:13 -- host/auth.sh@68 -- # keyid=2 00:18:47.392 17:35:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.392 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.392 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:47.392 17:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.392 17:35:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:47.392 17:35:13 -- nvmf/common.sh@717 -- # local ip 00:18:47.392 17:35:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:47.392 17:35:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:47.392 17:35:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:18:47.392 17:35:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.392 17:35:13 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:47.392 17:35:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:47.392 17:35:13 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:47.392 17:35:13 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:47.392 17:35:13 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:47.392 17:35:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:47.392 17:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.392 17:35:13 -- common/autotest_common.sh@10 -- # set +x 00:18:47.650 nvme0n1 00:18:47.650 17:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.650 17:35:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.650 17:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.650 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:18:47.650 17:35:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:47.650 17:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.650 17:35:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.650 17:35:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.650 17:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.650 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:18:47.909 17:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.909 17:35:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:47.909 17:35:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:47.909 17:35:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:47.909 17:35:14 -- host/auth.sh@44 -- # digest=sha256 00:18:47.909 17:35:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:47.909 17:35:14 -- host/auth.sh@44 -- # keyid=3 00:18:47.909 17:35:14 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:18:47.909 17:35:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:47.909 17:35:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:47.909 17:35:14 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:18:47.909 17:35:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:18:47.909 17:35:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:47.909 17:35:14 -- host/auth.sh@68 -- # digest=sha256 00:18:47.909 17:35:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:47.909 17:35:14 -- host/auth.sh@68 -- # keyid=3 00:18:47.909 17:35:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.909 17:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.909 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:18:47.909 17:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.909 17:35:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:47.909 17:35:14 -- nvmf/common.sh@717 -- # local ip 00:18:47.909 17:35:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:47.909 17:35:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:47.909 17:35:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.909 17:35:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:18:47.909 17:35:14 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:47.909 17:35:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:47.909 17:35:14 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:47.909 17:35:14 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:47.909 17:35:14 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:47.909 17:35:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:47.909 17:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.909 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:18:47.909 nvme0n1 00:18:47.909 17:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.909 17:35:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.909 17:35:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:47.909 17:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.909 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:18:47.909 17:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.168 17:35:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.168 17:35:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.168 17:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.168 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:18:48.168 17:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.168 17:35:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:48.168 17:35:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:48.168 17:35:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:48.168 17:35:14 -- host/auth.sh@44 -- # digest=sha256 00:18:48.168 17:35:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:48.168 17:35:14 -- host/auth.sh@44 -- # keyid=4 00:18:48.168 17:35:14 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:18:48.168 17:35:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:48.168 17:35:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:48.168 17:35:14 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:18:48.168 17:35:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:18:48.168 17:35:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:48.168 17:35:14 -- host/auth.sh@68 -- # digest=sha256 00:18:48.168 17:35:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:48.168 17:35:14 -- host/auth.sh@68 -- # keyid=4 00:18:48.168 17:35:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:48.168 17:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.168 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:18:48.168 17:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.168 17:35:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:48.168 17:35:14 -- nvmf/common.sh@717 -- # local ip 00:18:48.168 17:35:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:48.168 17:35:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:48.168 17:35:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.168 17:35:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.168 17:35:14 -- nvmf/common.sh@723 -- # [[ -z 
rdma ]] 00:18:48.168 17:35:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:48.168 17:35:14 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:48.168 17:35:14 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:48.168 17:35:14 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:48.168 17:35:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:48.168 17:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.168 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:18:48.427 nvme0n1 00:18:48.427 17:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.427 17:35:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.427 17:35:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:48.427 17:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.427 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:18:48.427 17:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.427 17:35:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.427 17:35:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.427 17:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.427 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:18:48.427 17:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.427 17:35:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.427 17:35:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:48.427 17:35:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:48.427 17:35:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:48.427 17:35:14 -- host/auth.sh@44 -- # digest=sha256 00:18:48.427 17:35:14 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:48.427 17:35:14 -- host/auth.sh@44 -- # keyid=0 00:18:48.427 17:35:14 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:48.427 17:35:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:48.427 17:35:14 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:48.427 17:35:14 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:48.427 17:35:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:18:48.427 17:35:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:48.427 17:35:14 -- host/auth.sh@68 -- # digest=sha256 00:18:48.428 17:35:14 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:48.428 17:35:14 -- host/auth.sh@68 -- # keyid=0 00:18:48.428 17:35:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:48.428 17:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.428 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:18:48.428 17:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.428 17:35:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:48.428 17:35:14 -- nvmf/common.sh@717 -- # local ip 00:18:48.428 17:35:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:48.428 17:35:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:48.428 17:35:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.428 17:35:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.428 17:35:14 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:48.428 17:35:14 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:18:48.428 17:35:14 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:48.428 17:35:14 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:48.428 17:35:14 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:48.428 17:35:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:48.428 17:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.428 17:35:14 -- common/autotest_common.sh@10 -- # set +x 00:18:48.687 nvme0n1 00:18:48.687 17:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.687 17:35:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.687 17:35:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:48.687 17:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.687 17:35:15 -- common/autotest_common.sh@10 -- # set +x 00:18:48.687 17:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.687 17:35:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.687 17:35:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.687 17:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.687 17:35:15 -- common/autotest_common.sh@10 -- # set +x 00:18:48.687 17:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.687 17:35:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:48.687 17:35:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:48.687 17:35:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:48.687 17:35:15 -- host/auth.sh@44 -- # digest=sha256 00:18:48.687 17:35:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:48.687 17:35:15 -- host/auth.sh@44 -- # keyid=1 00:18:48.687 17:35:15 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:48.687 17:35:15 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:48.687 17:35:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:48.687 17:35:15 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:48.687 17:35:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:18:48.687 17:35:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:48.687 17:35:15 -- host/auth.sh@68 -- # digest=sha256 00:18:48.687 17:35:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:48.687 17:35:15 -- host/auth.sh@68 -- # keyid=1 00:18:48.687 17:35:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:48.687 17:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.687 17:35:15 -- common/autotest_common.sh@10 -- # set +x 00:18:48.687 17:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.687 17:35:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:48.687 17:35:15 -- nvmf/common.sh@717 -- # local ip 00:18:48.687 17:35:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:48.687 17:35:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:48.687 17:35:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.687 17:35:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.687 17:35:15 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:48.687 17:35:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:48.687 17:35:15 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 
00:18:48.687 17:35:15 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:48.687 17:35:15 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:48.687 17:35:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:48.687 17:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.687 17:35:15 -- common/autotest_common.sh@10 -- # set +x 00:18:48.946 nvme0n1 00:18:48.946 17:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.946 17:35:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.946 17:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.946 17:35:15 -- common/autotest_common.sh@10 -- # set +x 00:18:48.946 17:35:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:49.206 17:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.206 17:35:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.206 17:35:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.206 17:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.206 17:35:15 -- common/autotest_common.sh@10 -- # set +x 00:18:49.206 17:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.206 17:35:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:49.206 17:35:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:49.206 17:35:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:49.206 17:35:15 -- host/auth.sh@44 -- # digest=sha256 00:18:49.206 17:35:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:49.206 17:35:15 -- host/auth.sh@44 -- # keyid=2 00:18:49.206 17:35:15 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:18:49.206 17:35:15 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:49.206 17:35:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:49.206 17:35:15 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:18:49.206 17:35:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:18:49.206 17:35:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:49.206 17:35:15 -- host/auth.sh@68 -- # digest=sha256 00:18:49.206 17:35:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:49.206 17:35:15 -- host/auth.sh@68 -- # keyid=2 00:18:49.206 17:35:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:49.206 17:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.206 17:35:15 -- common/autotest_common.sh@10 -- # set +x 00:18:49.206 17:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.206 17:35:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:49.206 17:35:15 -- nvmf/common.sh@717 -- # local ip 00:18:49.206 17:35:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:49.206 17:35:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:49.206 17:35:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.206 17:35:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.206 17:35:15 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:49.206 17:35:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:49.206 17:35:15 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:49.206 17:35:15 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:49.206 17:35:15 -- nvmf/common.sh@731 -- # echo 192.168.100.8 
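The get_main_ns_ip trace that repeats between authentication rounds (local ip, the ip_candidates associative array, the emptiness checks, and the final echo 192.168.100.8) is transport-based address selection: the helper maps the transport to the name of the environment variable holding the address, dereferences it, and prints the result. A compact sketch of that logic, written against the values visible in the trace -- the transport variable name is an assumption, since xtrace only prints the already-expanded word rdma:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP    # RDMA runs dial the first target address
            [tcp]=NVMF_INITIATOR_IP        # TCP runs dial the initiator-side address
        )
        [[ -n $TEST_TRANSPORT ]] || return 1     # TEST_TRANSPORT is assumed; it expands to rdma here
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -n $ip ]] || return 1                 # no candidate variable for this transport
        ip=${!ip}                                # indirect expansion -> 192.168.100.8 in this run
        [[ -n $ip ]] || return 1
        echo "$ip"
    }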
00:18:49.206 17:35:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:49.206 17:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.206 17:35:15 -- common/autotest_common.sh@10 -- # set +x 00:18:49.465 nvme0n1 00:18:49.465 17:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.465 17:35:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.465 17:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.466 17:35:15 -- common/autotest_common.sh@10 -- # set +x 00:18:49.466 17:35:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:49.466 17:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.466 17:35:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.466 17:35:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.466 17:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.466 17:35:15 -- common/autotest_common.sh@10 -- # set +x 00:18:49.466 17:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.466 17:35:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:49.466 17:35:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:18:49.466 17:35:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:49.466 17:35:15 -- host/auth.sh@44 -- # digest=sha256 00:18:49.466 17:35:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:49.466 17:35:15 -- host/auth.sh@44 -- # keyid=3 00:18:49.466 17:35:15 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:18:49.466 17:35:15 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:49.466 17:35:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:49.466 17:35:15 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:18:49.466 17:35:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:18:49.466 17:35:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:49.466 17:35:15 -- host/auth.sh@68 -- # digest=sha256 00:18:49.466 17:35:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:49.466 17:35:15 -- host/auth.sh@68 -- # keyid=3 00:18:49.466 17:35:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:49.466 17:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.466 17:35:15 -- common/autotest_common.sh@10 -- # set +x 00:18:49.466 17:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.466 17:35:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:49.466 17:35:15 -- nvmf/common.sh@717 -- # local ip 00:18:49.466 17:35:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:49.466 17:35:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:49.466 17:35:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.466 17:35:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.466 17:35:15 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:49.466 17:35:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:49.466 17:35:15 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:49.466 17:35:15 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:49.466 17:35:15 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:49.466 17:35:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:49.466 17:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.466 17:35:15 -- common/autotest_common.sh@10 -- # set +x 00:18:49.724 nvme0n1 00:18:49.724 17:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.724 17:35:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.724 17:35:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:49.724 17:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.724 17:35:16 -- common/autotest_common.sh@10 -- # set +x 00:18:49.984 17:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.984 17:35:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.984 17:35:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.984 17:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.984 17:35:16 -- common/autotest_common.sh@10 -- # set +x 00:18:49.984 17:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.984 17:35:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:49.984 17:35:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:49.984 17:35:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:49.984 17:35:16 -- host/auth.sh@44 -- # digest=sha256 00:18:49.984 17:35:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:49.984 17:35:16 -- host/auth.sh@44 -- # keyid=4 00:18:49.984 17:35:16 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:18:49.984 17:35:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:49.984 17:35:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:49.984 17:35:16 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:18:49.984 17:35:16 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:18:49.984 17:35:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:49.984 17:35:16 -- host/auth.sh@68 -- # digest=sha256 00:18:49.984 17:35:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:49.984 17:35:16 -- host/auth.sh@68 -- # keyid=4 00:18:49.984 17:35:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:49.984 17:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.984 17:35:16 -- common/autotest_common.sh@10 -- # set +x 00:18:49.984 17:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.984 17:35:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:49.984 17:35:16 -- nvmf/common.sh@717 -- # local ip 00:18:49.984 17:35:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:49.984 17:35:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:49.984 17:35:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.984 17:35:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.984 17:35:16 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:49.984 17:35:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:49.984 17:35:16 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:49.984 17:35:16 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:49.984 17:35:16 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:49.984 17:35:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:49.984 17:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.984 17:35:16 -- common/autotest_common.sh@10 -- # set +x 00:18:50.244 nvme0n1 00:18:50.244 17:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.244 17:35:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.244 17:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.244 17:35:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:50.244 17:35:16 -- common/autotest_common.sh@10 -- # set +x 00:18:50.244 17:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.244 17:35:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.244 17:35:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.244 17:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.244 17:35:16 -- common/autotest_common.sh@10 -- # set +x 00:18:50.244 17:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.244 17:35:16 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.244 17:35:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:50.244 17:35:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:50.244 17:35:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:50.244 17:35:16 -- host/auth.sh@44 -- # digest=sha256 00:18:50.244 17:35:16 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:50.244 17:35:16 -- host/auth.sh@44 -- # keyid=0 00:18:50.244 17:35:16 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:50.244 17:35:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:50.244 17:35:16 -- host/auth.sh@48 -- # echo ffdhe6144 00:18:50.244 17:35:16 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:50.244 17:35:16 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:18:50.244 17:35:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:50.244 17:35:16 -- host/auth.sh@68 -- # digest=sha256 00:18:50.244 17:35:16 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:18:50.244 17:35:16 -- host/auth.sh@68 -- # keyid=0 00:18:50.244 17:35:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:50.244 17:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.244 17:35:16 -- common/autotest_common.sh@10 -- # set +x 00:18:50.244 17:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.244 17:35:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:50.244 17:35:16 -- nvmf/common.sh@717 -- # local ip 00:18:50.244 17:35:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:50.244 17:35:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:50.244 17:35:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.244 17:35:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.244 17:35:16 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:50.244 17:35:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:50.244 17:35:16 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:50.244 17:35:16 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:50.244 17:35:16 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:50.244 17:35:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 00:18:50.244 17:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.244 17:35:16 -- common/autotest_common.sh@10 -- # set +x 00:18:50.813 nvme0n1 00:18:50.813 17:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.813 17:35:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:50.813 17:35:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.813 17:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.813 17:35:17 -- common/autotest_common.sh@10 -- # set +x 00:18:50.813 17:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.813 17:35:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.813 17:35:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.813 17:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.813 17:35:17 -- common/autotest_common.sh@10 -- # set +x 00:18:50.813 17:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.813 17:35:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:50.813 17:35:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:50.813 17:35:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:50.813 17:35:17 -- host/auth.sh@44 -- # digest=sha256 00:18:50.813 17:35:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:50.813 17:35:17 -- host/auth.sh@44 -- # keyid=1 00:18:50.813 17:35:17 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:50.813 17:35:17 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:50.813 17:35:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:18:50.813 17:35:17 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:50.813 17:35:17 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:18:50.813 17:35:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:50.813 17:35:17 -- host/auth.sh@68 -- # digest=sha256 00:18:50.813 17:35:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:18:50.813 17:35:17 -- host/auth.sh@68 -- # keyid=1 00:18:50.813 17:35:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:50.813 17:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:50.813 17:35:17 -- common/autotest_common.sh@10 -- # set +x 00:18:50.813 17:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:50.813 17:35:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:50.813 17:35:17 -- nvmf/common.sh@717 -- # local ip 00:18:50.813 17:35:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:50.813 17:35:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:50.813 17:35:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.813 17:35:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.813 17:35:17 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:50.813 17:35:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:50.813 17:35:17 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:50.813 17:35:17 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:50.813 17:35:17 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:50.813 17:35:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:50.813 17:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 
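Every pairing of digest, DH group and key id traced in this section follows the same two-sided round: nvmet_auth_set_key pushes the DHHC-1 secret plus hash and DH group into the kernel target's configfs entry for nqn.2024-02.io.spdk:host0, then connect_authenticate narrows the SPDK initiator to the same parameters over rpc_cmd and attaches with the matching key. Condensed into one round, the shape is roughly the sketch below; rpc_cmd is the harness's JSON-RPC wrapper, the target-side attribute file names are assumptions (xtrace hides the redirection targets), and key1 is assumed to have been registered with the initiator earlier in the script, outside this excerpt:

    host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    # Target side: provision DH-HMAC-CHAP parameters for this host (attribute names assumed)
    echo 'hmac(sha256)' > "$host_cfs/dhchap_hash"
    echo ffdhe2048 > "$host_cfs/dhchap_dhgroup"
    echo 'DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==:' > "$host_cfs/dhchap_key"

    # Initiator side: allow only the matching digest/dhgroup, attach with the key, verify, detach
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expects nvme0 when authentication succeeds
    rpc_cmd bdev_nvme_detach_controller nvme0               # tear down before the next combination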
00:18:50.813 17:35:17 -- common/autotest_common.sh@10 -- # set +x 00:18:51.418 nvme0n1 00:18:51.418 17:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.418 17:35:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.418 17:35:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:51.418 17:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.418 17:35:17 -- common/autotest_common.sh@10 -- # set +x 00:18:51.418 17:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.418 17:35:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.418 17:35:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.418 17:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.418 17:35:17 -- common/autotest_common.sh@10 -- # set +x 00:18:51.418 17:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.418 17:35:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:51.418 17:35:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:51.418 17:35:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:51.418 17:35:17 -- host/auth.sh@44 -- # digest=sha256 00:18:51.418 17:35:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:51.418 17:35:17 -- host/auth.sh@44 -- # keyid=2 00:18:51.418 17:35:17 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:18:51.418 17:35:17 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:51.418 17:35:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:18:51.418 17:35:17 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:18:51.418 17:35:17 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:18:51.418 17:35:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:51.418 17:35:17 -- host/auth.sh@68 -- # digest=sha256 00:18:51.418 17:35:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:18:51.418 17:35:17 -- host/auth.sh@68 -- # keyid=2 00:18:51.418 17:35:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:51.418 17:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.418 17:35:17 -- common/autotest_common.sh@10 -- # set +x 00:18:51.678 17:35:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.678 17:35:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:51.678 17:35:17 -- nvmf/common.sh@717 -- # local ip 00:18:51.678 17:35:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:51.678 17:35:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:51.678 17:35:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.678 17:35:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.678 17:35:17 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:51.678 17:35:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:51.678 17:35:17 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:51.678 17:35:17 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:51.678 17:35:17 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:51.678 17:35:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:51.678 17:35:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.678 17:35:17 -- common/autotest_common.sh@10 -- # set +x 00:18:51.937 nvme0n1 00:18:51.937 17:35:18 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:18:51.937 17:35:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.937 17:35:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:51.937 17:35:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.937 17:35:18 -- common/autotest_common.sh@10 -- # set +x 00:18:52.196 17:35:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.196 17:35:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.196 17:35:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.196 17:35:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.196 17:35:18 -- common/autotest_common.sh@10 -- # set +x 00:18:52.196 17:35:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.196 17:35:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:52.196 17:35:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:18:52.196 17:35:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:52.196 17:35:18 -- host/auth.sh@44 -- # digest=sha256 00:18:52.196 17:35:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:52.196 17:35:18 -- host/auth.sh@44 -- # keyid=3 00:18:52.196 17:35:18 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:18:52.196 17:35:18 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:52.196 17:35:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:18:52.196 17:35:18 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:18:52.196 17:35:18 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:18:52.196 17:35:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:52.196 17:35:18 -- host/auth.sh@68 -- # digest=sha256 00:18:52.196 17:35:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:18:52.196 17:35:18 -- host/auth.sh@68 -- # keyid=3 00:18:52.196 17:35:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:52.196 17:35:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.196 17:35:18 -- common/autotest_common.sh@10 -- # set +x 00:18:52.196 17:35:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.196 17:35:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:52.196 17:35:18 -- nvmf/common.sh@717 -- # local ip 00:18:52.196 17:35:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:52.196 17:35:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:52.196 17:35:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.196 17:35:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.196 17:35:18 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:52.196 17:35:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:52.196 17:35:18 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:52.196 17:35:18 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:52.196 17:35:18 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:52.196 17:35:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:52.196 17:35:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.196 17:35:18 -- common/autotest_common.sh@10 -- # set +x 00:18:52.766 nvme0n1 00:18:52.766 17:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.766 17:35:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 
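Each repetition of the @66-@74 block above is one pass of connect_authenticate: restrict the initiator's allowed DH-CHAP digest and DH group, attach a controller over RDMA with the key under test, confirm the controller shows up by name (bdev_nvme_attach_controller also prints the created bdev, the nvme0n1 lines in the log), then detach. A condensed sketch of that sequence, reusing the exact RPCs from the log, follows; rpc_cmd here is a simplified stand-in for the test framework's helper, and the rpc.py path is an assumption. The named keys key0 through key4 are registered earlier in the test, outside this excerpt.

```bash
#!/usr/bin/env bash
set -e

# Hypothetical thin wrapper; the real rpc_cmd goes through the test
# framework's RPC socket handling in autotest_common.sh.
rpc_cmd() { /path/to/spdk/scripts/rpc.py "$@"; }   # path is an assumption

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local hostnqn=nqn.2024-02.io.spdk:host0
    local subnqn=nqn.2024-02.io.spdk:cnode0
    local target_ip=192.168.100.8        # NVMF_FIRST_TARGET_IP in this run

    # Restrict the initiator to the digest/dhgroup under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"

    # Attach over RDMA, authenticating with the pre-registered key.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a "$target_ip" -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid"

    # Authentication succeeded if the controller is visible by name.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    rpc_cmd bdev_nvme_detach_controller nvme0
}

# Usage matching the first iteration in this excerpt:
#   connect_authenticate sha256 ffdhe6144 0
```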
00:18:52.766 17:35:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:52.766 17:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.766 17:35:19 -- common/autotest_common.sh@10 -- # set +x 00:18:52.766 17:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.766 17:35:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.766 17:35:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.766 17:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.766 17:35:19 -- common/autotest_common.sh@10 -- # set +x 00:18:52.767 17:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.767 17:35:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:52.767 17:35:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:52.767 17:35:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:52.767 17:35:19 -- host/auth.sh@44 -- # digest=sha256 00:18:52.767 17:35:19 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:52.767 17:35:19 -- host/auth.sh@44 -- # keyid=4 00:18:52.767 17:35:19 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:18:52.767 17:35:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:52.767 17:35:19 -- host/auth.sh@48 -- # echo ffdhe6144 00:18:52.767 17:35:19 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:18:52.767 17:35:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:18:52.767 17:35:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:52.767 17:35:19 -- host/auth.sh@68 -- # digest=sha256 00:18:52.767 17:35:19 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:18:52.767 17:35:19 -- host/auth.sh@68 -- # keyid=4 00:18:52.767 17:35:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:52.767 17:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.767 17:35:19 -- common/autotest_common.sh@10 -- # set +x 00:18:52.767 17:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.767 17:35:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:52.767 17:35:19 -- nvmf/common.sh@717 -- # local ip 00:18:52.767 17:35:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:52.767 17:35:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:52.767 17:35:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.767 17:35:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.767 17:35:19 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:52.767 17:35:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:52.767 17:35:19 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:52.767 17:35:19 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:52.767 17:35:19 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:52.767 17:35:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:52.767 17:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.767 17:35:19 -- common/autotest_common.sh@10 -- # set +x 00:18:53.334 nvme0n1 00:18:53.334 17:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.334 17:35:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.334 17:35:19 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:18:53.334 17:35:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:53.334 17:35:19 -- common/autotest_common.sh@10 -- # set +x 00:18:53.334 17:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.334 17:35:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.334 17:35:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.334 17:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.334 17:35:19 -- common/autotest_common.sh@10 -- # set +x 00:18:53.334 17:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.334 17:35:19 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.334 17:35:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:53.334 17:35:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:53.334 17:35:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:53.334 17:35:19 -- host/auth.sh@44 -- # digest=sha256 00:18:53.334 17:35:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:53.334 17:35:19 -- host/auth.sh@44 -- # keyid=0 00:18:53.334 17:35:19 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:53.334 17:35:19 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:53.334 17:35:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:18:53.334 17:35:19 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:53.334 17:35:19 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:18:53.334 17:35:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:53.334 17:35:19 -- host/auth.sh@68 -- # digest=sha256 00:18:53.334 17:35:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:18:53.334 17:35:19 -- host/auth.sh@68 -- # keyid=0 00:18:53.334 17:35:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:53.334 17:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.334 17:35:19 -- common/autotest_common.sh@10 -- # set +x 00:18:53.334 17:35:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.334 17:35:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:53.334 17:35:19 -- nvmf/common.sh@717 -- # local ip 00:18:53.334 17:35:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:53.334 17:35:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:53.334 17:35:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.334 17:35:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.334 17:35:19 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:53.334 17:35:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:53.334 17:35:19 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:53.334 17:35:19 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:53.334 17:35:19 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:53.334 17:35:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:53.334 17:35:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.334 17:35:19 -- common/autotest_common.sh@10 -- # set +x 00:18:54.278 nvme0n1 00:18:54.278 17:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.278 17:35:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.278 17:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.278 17:35:20 -- host/auth.sh@73 -- # jq 
-r '.[].name' 00:18:54.278 17:35:20 -- common/autotest_common.sh@10 -- # set +x 00:18:54.537 17:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.537 17:35:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.537 17:35:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.537 17:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.537 17:35:20 -- common/autotest_common.sh@10 -- # set +x 00:18:54.537 17:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.537 17:35:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:54.537 17:35:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:54.537 17:35:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:54.537 17:35:20 -- host/auth.sh@44 -- # digest=sha256 00:18:54.537 17:35:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:54.537 17:35:20 -- host/auth.sh@44 -- # keyid=1 00:18:54.537 17:35:20 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:54.537 17:35:20 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:54.537 17:35:20 -- host/auth.sh@48 -- # echo ffdhe8192 00:18:54.537 17:35:20 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:54.537 17:35:20 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:18:54.537 17:35:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.537 17:35:20 -- host/auth.sh@68 -- # digest=sha256 00:18:54.537 17:35:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:18:54.537 17:35:20 -- host/auth.sh@68 -- # keyid=1 00:18:54.537 17:35:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:54.537 17:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.537 17:35:20 -- common/autotest_common.sh@10 -- # set +x 00:18:54.537 17:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.537 17:35:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.537 17:35:20 -- nvmf/common.sh@717 -- # local ip 00:18:54.537 17:35:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.537 17:35:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:54.537 17:35:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.537 17:35:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.537 17:35:20 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:54.537 17:35:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:54.537 17:35:20 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:54.537 17:35:20 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:54.537 17:35:20 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:54.537 17:35:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:54.537 17:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.537 17:35:20 -- common/autotest_common.sh@10 -- # set +x 00:18:55.469 nvme0n1 00:18:55.470 17:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.470 17:35:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.470 17:35:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:55.470 17:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.470 17:35:21 -- common/autotest_common.sh@10 -- # set +x 00:18:55.470 
17:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.470 17:35:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.470 17:35:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.470 17:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.470 17:35:21 -- common/autotest_common.sh@10 -- # set +x 00:18:55.470 17:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.470 17:35:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:55.470 17:35:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:55.470 17:35:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:55.470 17:35:21 -- host/auth.sh@44 -- # digest=sha256 00:18:55.470 17:35:21 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:55.470 17:35:21 -- host/auth.sh@44 -- # keyid=2 00:18:55.470 17:35:21 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:18:55.470 17:35:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:55.470 17:35:21 -- host/auth.sh@48 -- # echo ffdhe8192 00:18:55.470 17:35:21 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:18:55.470 17:35:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:18:55.470 17:35:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:55.470 17:35:21 -- host/auth.sh@68 -- # digest=sha256 00:18:55.470 17:35:21 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:18:55.470 17:35:21 -- host/auth.sh@68 -- # keyid=2 00:18:55.470 17:35:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:55.470 17:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.470 17:35:21 -- common/autotest_common.sh@10 -- # set +x 00:18:55.470 17:35:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.470 17:35:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:55.470 17:35:21 -- nvmf/common.sh@717 -- # local ip 00:18:55.470 17:35:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:55.470 17:35:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:55.470 17:35:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.470 17:35:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.470 17:35:21 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:55.470 17:35:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:55.470 17:35:21 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:55.470 17:35:21 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:55.470 17:35:21 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:55.470 17:35:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:55.470 17:35:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.470 17:35:21 -- common/autotest_common.sh@10 -- # set +x 00:18:56.451 nvme0n1 00:18:56.451 17:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.451 17:35:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.451 17:35:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.451 17:35:22 -- common/autotest_common.sh@10 -- # set +x 00:18:56.451 17:35:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:56.711 17:35:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.711 17:35:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.711 
17:35:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.711 17:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.711 17:35:23 -- common/autotest_common.sh@10 -- # set +x 00:18:56.711 17:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.711 17:35:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:56.711 17:35:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:56.711 17:35:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:56.711 17:35:23 -- host/auth.sh@44 -- # digest=sha256 00:18:56.711 17:35:23 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:56.711 17:35:23 -- host/auth.sh@44 -- # keyid=3 00:18:56.711 17:35:23 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:18:56.711 17:35:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:56.711 17:35:23 -- host/auth.sh@48 -- # echo ffdhe8192 00:18:56.711 17:35:23 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:18:56.711 17:35:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:18:56.711 17:35:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:56.711 17:35:23 -- host/auth.sh@68 -- # digest=sha256 00:18:56.711 17:35:23 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:18:56.711 17:35:23 -- host/auth.sh@68 -- # keyid=3 00:18:56.711 17:35:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:56.711 17:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.711 17:35:23 -- common/autotest_common.sh@10 -- # set +x 00:18:56.711 17:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.711 17:35:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:56.711 17:35:23 -- nvmf/common.sh@717 -- # local ip 00:18:56.711 17:35:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:56.711 17:35:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:56.711 17:35:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.711 17:35:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.711 17:35:23 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:56.711 17:35:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:56.711 17:35:23 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:56.711 17:35:23 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:56.711 17:35:23 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:56.711 17:35:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:56.711 17:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.711 17:35:23 -- common/autotest_common.sh@10 -- # set +x 00:18:57.648 nvme0n1 00:18:57.648 17:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.648 17:35:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:57.648 17:35:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.648 17:35:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.648 17:35:23 -- common/autotest_common.sh@10 -- # set +x 00:18:57.648 17:35:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.648 17:35:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.648 17:35:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.648 17:35:24 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.648 17:35:24 -- common/autotest_common.sh@10 -- # set +x 00:18:57.648 17:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.648 17:35:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:57.648 17:35:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:57.648 17:35:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:57.648 17:35:24 -- host/auth.sh@44 -- # digest=sha256 00:18:57.648 17:35:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:57.648 17:35:24 -- host/auth.sh@44 -- # keyid=4 00:18:57.648 17:35:24 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:18:57.648 17:35:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:57.648 17:35:24 -- host/auth.sh@48 -- # echo ffdhe8192 00:18:57.648 17:35:24 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:18:57.648 17:35:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:18:57.648 17:35:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:57.648 17:35:24 -- host/auth.sh@68 -- # digest=sha256 00:18:57.648 17:35:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:18:57.648 17:35:24 -- host/auth.sh@68 -- # keyid=4 00:18:57.648 17:35:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:57.648 17:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.648 17:35:24 -- common/autotest_common.sh@10 -- # set +x 00:18:57.648 17:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.648 17:35:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:57.648 17:35:24 -- nvmf/common.sh@717 -- # local ip 00:18:57.648 17:35:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:57.648 17:35:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:57.648 17:35:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.648 17:35:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.648 17:35:24 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:57.648 17:35:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:57.648 17:35:24 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:57.648 17:35:24 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:57.648 17:35:24 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:57.648 17:35:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:57.648 17:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.648 17:35:24 -- common/autotest_common.sh@10 -- # set +x 00:18:58.588 nvme0n1 00:18:58.588 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.588 17:35:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.588 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.588 17:35:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:58.588 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:58.588 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.588 17:35:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.588 17:35:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.588 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 
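Before every attach the trace passes through nvmf/common.sh@717-@731 (invoked as get_main_ns_ip from host/auth.sh@70) to pick the address to dial: an associative array maps the transport to the name of the environment variable holding the address, and that name is then dereferenced, so rdma resolves via NVMF_FIRST_TARGET_IP to 192.168.100.8 in this job. The reconstruction below follows the flow visible in the trace; the TEST_TRANSPORT variable name and the error messages are assumptions made for illustration.

```bash
#!/usr/bin/env bash
# Reconstruction of the address-selection logic traced at nvmf/common.sh
# markers @717-@731 above; details of the real helper may differ.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    if [[ -z ${TEST_TRANSPORT:-} ]] || \
       [[ -z ${ip_candidates[$TEST_TRANSPORT]:-} ]]; then
        echo "transport type not set or unsupported" >&2
        return 1
    fi

    ip=${ip_candidates[$TEST_TRANSPORT]}
    ip=${!ip}    # indirect expansion: NVMF_FIRST_TARGET_IP -> 192.168.100.8
    if [[ -z $ip ]]; then
        echo "no address configured for transport $TEST_TRANSPORT" >&2
        return 1
    fi
    echo "$ip"
}

# Usage with the values from this run:
#   TEST_TRANSPORT=rdma NVMF_FIRST_TARGET_IP=192.168.100.8 get_main_ns_ip
#   -> 192.168.100.8
```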
00:18:58.588 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:58.588 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.588 17:35:25 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:18:58.588 17:35:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.588 17:35:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:58.588 17:35:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:18:58.588 17:35:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:58.588 17:35:25 -- host/auth.sh@44 -- # digest=sha384 00:18:58.588 17:35:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:58.588 17:35:25 -- host/auth.sh@44 -- # keyid=0 00:18:58.588 17:35:25 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:58.588 17:35:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:18:58.588 17:35:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:58.588 17:35:25 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:58.588 17:35:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:18:58.588 17:35:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:58.588 17:35:25 -- host/auth.sh@68 -- # digest=sha384 00:18:58.588 17:35:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:58.588 17:35:25 -- host/auth.sh@68 -- # keyid=0 00:18:58.588 17:35:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:58.588 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.588 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:58.847 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.847 17:35:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:58.847 17:35:25 -- nvmf/common.sh@717 -- # local ip 00:18:58.847 17:35:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:58.847 17:35:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:58.847 17:35:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.847 17:35:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.847 17:35:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:58.847 17:35:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:58.847 17:35:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:58.847 17:35:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:58.847 17:35:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:58.847 17:35:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:58.847 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.847 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:58.847 nvme0n1 00:18:58.847 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.847 17:35:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.847 17:35:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:58.847 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.847 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:58.847 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.847 17:35:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.847 17:35:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.847 17:35:25 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:18:58.847 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:58.847 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.847 17:35:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:58.847 17:35:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:58.847 17:35:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:58.847 17:35:25 -- host/auth.sh@44 -- # digest=sha384 00:18:58.847 17:35:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:58.847 17:35:25 -- host/auth.sh@44 -- # keyid=1 00:18:58.847 17:35:25 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:58.847 17:35:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:18:58.847 17:35:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:58.847 17:35:25 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:18:58.847 17:35:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:18:58.847 17:35:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:58.847 17:35:25 -- host/auth.sh@68 -- # digest=sha384 00:18:58.847 17:35:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:58.847 17:35:25 -- host/auth.sh@68 -- # keyid=1 00:18:58.847 17:35:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:58.847 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.847 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:58.847 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.107 17:35:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:59.107 17:35:25 -- nvmf/common.sh@717 -- # local ip 00:18:59.107 17:35:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:59.107 17:35:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:59.107 17:35:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.107 17:35:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.107 17:35:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:59.107 17:35:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:59.107 17:35:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:59.107 17:35:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:59.107 17:35:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:59.107 17:35:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:59.107 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.107 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:59.107 nvme0n1 00:18:59.107 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.107 17:35:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.107 17:35:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:59.107 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.107 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:59.107 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.107 17:35:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.107 17:35:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.107 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.107 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:59.107 
17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.107 17:35:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:59.107 17:35:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:59.107 17:35:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:59.107 17:35:25 -- host/auth.sh@44 -- # digest=sha384 00:18:59.107 17:35:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:59.107 17:35:25 -- host/auth.sh@44 -- # keyid=2 00:18:59.107 17:35:25 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:18:59.107 17:35:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:18:59.107 17:35:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:59.107 17:35:25 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:18:59.107 17:35:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:18:59.107 17:35:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:59.107 17:35:25 -- host/auth.sh@68 -- # digest=sha384 00:18:59.107 17:35:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:59.107 17:35:25 -- host/auth.sh@68 -- # keyid=2 00:18:59.107 17:35:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:59.107 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.107 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:59.367 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.367 17:35:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:59.367 17:35:25 -- nvmf/common.sh@717 -- # local ip 00:18:59.367 17:35:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:59.367 17:35:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:59.367 17:35:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.367 17:35:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.367 17:35:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:59.367 17:35:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:59.367 17:35:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:59.367 17:35:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:59.367 17:35:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:59.367 17:35:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:59.367 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.367 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:59.367 nvme0n1 00:18:59.367 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.367 17:35:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.367 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.367 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:59.367 17:35:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:59.367 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.367 17:35:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.367 17:35:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.367 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.367 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:59.367 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.367 17:35:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 
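All of the secrets cycled through this section use the textual DH-HMAC-CHAP form DHHC-1:<hh>:<base64 secret>:, with key ids 0 and 1 carrying an 00 field, key 2 an 01, key 3 an 02 and key 4 an 03. In the representation used by nvme-cli and the kernel that second field is commonly read as the transformation applied to the secret (00 = used as-is, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); treat that mapping as background knowledge, the log itself does not state it. A small helper that splits such a string into its fields, fed with the key-id 1 secret from the trace:

```bash
#!/usr/bin/env bash
# Illustrative parser for the DHHC-1 secret strings seen in this log.
describe_dhchap_key() {
    local key=$1 prefix hash secret rest
    IFS=':' read -r prefix hash secret rest <<< "$key"
    if [[ $prefix != "DHHC-1" ]]; then
        echo "not a DHHC-1 secret" >&2
        return 1
    fi
    # hash field: 00/01/02/03 as described above (assumed mapping).
    echo "format=$prefix hash_field=$hash base64_len=${#secret}"
}

describe_dhchap_key \
    "DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==:"
```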
00:18:59.367 17:35:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:59.367 17:35:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:59.367 17:35:25 -- host/auth.sh@44 -- # digest=sha384 00:18:59.367 17:35:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:59.367 17:35:25 -- host/auth.sh@44 -- # keyid=3 00:18:59.367 17:35:25 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:18:59.367 17:35:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:18:59.367 17:35:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:59.367 17:35:25 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:18:59.367 17:35:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:18:59.367 17:35:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:59.367 17:35:25 -- host/auth.sh@68 -- # digest=sha384 00:18:59.367 17:35:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:59.367 17:35:25 -- host/auth.sh@68 -- # keyid=3 00:18:59.367 17:35:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:59.367 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.367 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:59.628 17:35:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.628 17:35:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:59.628 17:35:25 -- nvmf/common.sh@717 -- # local ip 00:18:59.628 17:35:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:59.628 17:35:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:59.628 17:35:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.628 17:35:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.628 17:35:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:59.628 17:35:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:59.628 17:35:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:59.628 17:35:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:59.628 17:35:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:59.628 17:35:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:59.628 17:35:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.628 17:35:25 -- common/autotest_common.sh@10 -- # set +x 00:18:59.628 nvme0n1 00:18:59.628 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.628 17:35:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.628 17:35:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:59.628 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.628 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:18:59.628 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.628 17:35:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.628 17:35:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.628 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.628 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:18:59.628 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.628 17:35:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:59.628 17:35:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:59.628 
17:35:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:59.628 17:35:26 -- host/auth.sh@44 -- # digest=sha384 00:18:59.628 17:35:26 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:59.628 17:35:26 -- host/auth.sh@44 -- # keyid=4 00:18:59.628 17:35:26 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:18:59.628 17:35:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:18:59.628 17:35:26 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:59.628 17:35:26 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:18:59.628 17:35:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:18:59.628 17:35:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:59.628 17:35:26 -- host/auth.sh@68 -- # digest=sha384 00:18:59.628 17:35:26 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:59.628 17:35:26 -- host/auth.sh@68 -- # keyid=4 00:18:59.628 17:35:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:59.628 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.628 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:18:59.628 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.628 17:35:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:59.628 17:35:26 -- nvmf/common.sh@717 -- # local ip 00:18:59.628 17:35:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:59.628 17:35:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:59.628 17:35:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.628 17:35:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.628 17:35:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:59.628 17:35:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:59.628 17:35:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:59.628 17:35:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:59.628 17:35:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:59.628 17:35:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:59.628 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.628 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:18:59.887 nvme0n1 00:18:59.887 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.887 17:35:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.887 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.887 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:18:59.887 17:35:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:59.887 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.887 17:35:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.887 17:35:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.887 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.887 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:18:59.887 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.887 17:35:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.887 17:35:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:59.887 17:35:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 
ffdhe3072 0 00:18:59.887 17:35:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:59.887 17:35:26 -- host/auth.sh@44 -- # digest=sha384 00:18:59.887 17:35:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:59.887 17:35:26 -- host/auth.sh@44 -- # keyid=0 00:18:59.887 17:35:26 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:59.887 17:35:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:18:59.887 17:35:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:59.887 17:35:26 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:18:59.887 17:35:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:18:59.887 17:35:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:59.887 17:35:26 -- host/auth.sh@68 -- # digest=sha384 00:18:59.887 17:35:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:59.887 17:35:26 -- host/auth.sh@68 -- # keyid=0 00:18:59.887 17:35:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:59.887 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.887 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:18:59.887 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.887 17:35:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:59.887 17:35:26 -- nvmf/common.sh@717 -- # local ip 00:18:59.887 17:35:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:59.887 17:35:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:59.887 17:35:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.887 17:35:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.887 17:35:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:18:59.887 17:35:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:18:59.887 17:35:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:18:59.887 17:35:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:18:59.887 17:35:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:18:59.887 17:35:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:59.887 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.887 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:19:00.148 nvme0n1 00:19:00.148 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.148 17:35:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.148 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.148 17:35:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:00.148 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:19:00.148 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.148 17:35:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.148 17:35:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.148 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.148 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:19:00.148 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.148 17:35:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:00.148 17:35:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:00.148 17:35:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:00.148 17:35:26 -- host/auth.sh@44 -- # 
digest=sha384 00:19:00.148 17:35:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:00.148 17:35:26 -- host/auth.sh@44 -- # keyid=1 00:19:00.148 17:35:26 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:00.148 17:35:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:00.148 17:35:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:00.148 17:35:26 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:00.148 17:35:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:19:00.408 17:35:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:00.408 17:35:26 -- host/auth.sh@68 -- # digest=sha384 00:19:00.408 17:35:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:00.408 17:35:26 -- host/auth.sh@68 -- # keyid=1 00:19:00.408 17:35:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:00.408 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.408 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:19:00.408 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.408 17:35:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:00.408 17:35:26 -- nvmf/common.sh@717 -- # local ip 00:19:00.408 17:35:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:00.408 17:35:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:00.408 17:35:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.408 17:35:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.408 17:35:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:00.408 17:35:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:00.408 17:35:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:00.408 17:35:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:00.408 17:35:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:00.408 17:35:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:00.408 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.408 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:19:00.408 nvme0n1 00:19:00.408 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.408 17:35:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.408 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.408 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:19:00.408 17:35:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:00.408 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.408 17:35:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.408 17:35:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.667 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.667 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:19:00.667 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.667 17:35:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:00.667 17:35:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:00.667 17:35:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:00.667 17:35:26 -- host/auth.sh@44 -- # digest=sha384 00:19:00.667 17:35:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:00.667 
17:35:26 -- host/auth.sh@44 -- # keyid=2 00:19:00.667 17:35:26 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:00.667 17:35:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:00.667 17:35:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:00.667 17:35:26 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:00.667 17:35:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:19:00.667 17:35:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:00.667 17:35:26 -- host/auth.sh@68 -- # digest=sha384 00:19:00.667 17:35:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:00.667 17:35:26 -- host/auth.sh@68 -- # keyid=2 00:19:00.667 17:35:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:00.667 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.667 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:19:00.667 17:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.667 17:35:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:00.667 17:35:26 -- nvmf/common.sh@717 -- # local ip 00:19:00.667 17:35:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:00.667 17:35:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:00.667 17:35:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.667 17:35:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.667 17:35:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:00.667 17:35:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:00.667 17:35:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:00.667 17:35:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:00.667 17:35:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:00.667 17:35:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:00.667 17:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.667 17:35:26 -- common/autotest_common.sh@10 -- # set +x 00:19:00.667 nvme0n1 00:19:00.667 17:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.667 17:35:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.667 17:35:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:00.667 17:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.667 17:35:27 -- common/autotest_common.sh@10 -- # set +x 00:19:00.924 17:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.924 17:35:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.924 17:35:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.924 17:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.924 17:35:27 -- common/autotest_common.sh@10 -- # set +x 00:19:00.924 17:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.924 17:35:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:00.924 17:35:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:00.924 17:35:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:00.924 17:35:27 -- host/auth.sh@44 -- # digest=sha384 00:19:00.924 17:35:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:00.924 17:35:27 -- host/auth.sh@44 -- # keyid=3 00:19:00.924 17:35:27 -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:00.924 17:35:27 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:00.924 17:35:27 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:00.924 17:35:27 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:00.924 17:35:27 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:19:00.924 17:35:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:00.924 17:35:27 -- host/auth.sh@68 -- # digest=sha384 00:19:00.924 17:35:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:00.924 17:35:27 -- host/auth.sh@68 -- # keyid=3 00:19:00.924 17:35:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:00.925 17:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.925 17:35:27 -- common/autotest_common.sh@10 -- # set +x 00:19:00.925 17:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.925 17:35:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:00.925 17:35:27 -- nvmf/common.sh@717 -- # local ip 00:19:00.925 17:35:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:00.925 17:35:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:00.925 17:35:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.925 17:35:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.925 17:35:27 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:00.925 17:35:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:00.925 17:35:27 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:00.925 17:35:27 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:00.925 17:35:27 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:00.925 17:35:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:00.925 17:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.925 17:35:27 -- common/autotest_common.sh@10 -- # set +x 00:19:01.184 nvme0n1 00:19:01.184 17:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.184 17:35:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.184 17:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.184 17:35:27 -- common/autotest_common.sh@10 -- # set +x 00:19:01.184 17:35:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:01.184 17:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.184 17:35:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.184 17:35:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.184 17:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.184 17:35:27 -- common/autotest_common.sh@10 -- # set +x 00:19:01.184 17:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.184 17:35:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:01.184 17:35:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:01.184 17:35:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:01.184 17:35:27 -- host/auth.sh@44 -- # digest=sha384 00:19:01.184 17:35:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:01.184 17:35:27 -- host/auth.sh@44 -- # keyid=4 00:19:01.184 17:35:27 -- host/auth.sh@45 -- # 
key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:01.184 17:35:27 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:01.184 17:35:27 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:01.184 17:35:27 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:01.184 17:35:27 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:19:01.184 17:35:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:01.184 17:35:27 -- host/auth.sh@68 -- # digest=sha384 00:19:01.184 17:35:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:01.184 17:35:27 -- host/auth.sh@68 -- # keyid=4 00:19:01.184 17:35:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:01.184 17:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.184 17:35:27 -- common/autotest_common.sh@10 -- # set +x 00:19:01.184 17:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.184 17:35:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:01.184 17:35:27 -- nvmf/common.sh@717 -- # local ip 00:19:01.184 17:35:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:01.184 17:35:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:01.185 17:35:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.185 17:35:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.185 17:35:27 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:01.185 17:35:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:01.185 17:35:27 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:01.185 17:35:27 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:01.185 17:35:27 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:01.185 17:35:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:01.185 17:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.185 17:35:27 -- common/autotest_common.sh@10 -- # set +x 00:19:01.444 nvme0n1 00:19:01.444 17:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.444 17:35:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.444 17:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.444 17:35:27 -- common/autotest_common.sh@10 -- # set +x 00:19:01.444 17:35:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:01.444 17:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.444 17:35:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.444 17:35:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.444 17:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.444 17:35:27 -- common/autotest_common.sh@10 -- # set +x 00:19:01.444 17:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.444 17:35:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.444 17:35:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:01.444 17:35:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:01.444 17:35:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:01.444 17:35:27 -- host/auth.sh@44 -- # digest=sha384 00:19:01.444 17:35:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:01.444 17:35:27 -- host/auth.sh@44 -- # keyid=0 00:19:01.444 17:35:27 -- 
host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:01.444 17:35:27 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:01.444 17:35:27 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:01.444 17:35:27 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:01.444 17:35:27 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:19:01.444 17:35:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:01.444 17:35:27 -- host/auth.sh@68 -- # digest=sha384 00:19:01.444 17:35:27 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:01.444 17:35:27 -- host/auth.sh@68 -- # keyid=0 00:19:01.444 17:35:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:01.444 17:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.444 17:35:27 -- common/autotest_common.sh@10 -- # set +x 00:19:01.445 17:35:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.445 17:35:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:01.445 17:35:27 -- nvmf/common.sh@717 -- # local ip 00:19:01.445 17:35:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:01.445 17:35:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:01.445 17:35:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.445 17:35:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.445 17:35:27 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:01.445 17:35:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:01.445 17:35:27 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:01.445 17:35:27 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:01.445 17:35:27 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:01.445 17:35:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:01.445 17:35:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.445 17:35:27 -- common/autotest_common.sh@10 -- # set +x 00:19:01.705 nvme0n1 00:19:01.705 17:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.705 17:35:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.705 17:35:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:01.705 17:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.705 17:35:28 -- common/autotest_common.sh@10 -- # set +x 00:19:01.705 17:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.705 17:35:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.705 17:35:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.705 17:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.705 17:35:28 -- common/autotest_common.sh@10 -- # set +x 00:19:01.705 17:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.705 17:35:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:01.705 17:35:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:01.705 17:35:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:01.705 17:35:28 -- host/auth.sh@44 -- # digest=sha384 00:19:01.705 17:35:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:01.705 17:35:28 -- host/auth.sh@44 -- # keyid=1 00:19:01.705 17:35:28 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:01.705 17:35:28 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:01.705 17:35:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:01.705 17:35:28 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:01.705 17:35:28 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:19:01.705 17:35:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:01.705 17:35:28 -- host/auth.sh@68 -- # digest=sha384 00:19:01.705 17:35:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:01.705 17:35:28 -- host/auth.sh@68 -- # keyid=1 00:19:01.705 17:35:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:01.705 17:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.705 17:35:28 -- common/autotest_common.sh@10 -- # set +x 00:19:01.705 17:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.705 17:35:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:01.705 17:35:28 -- nvmf/common.sh@717 -- # local ip 00:19:01.705 17:35:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:01.705 17:35:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:01.705 17:35:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.705 17:35:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.705 17:35:28 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:01.705 17:35:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:01.705 17:35:28 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:01.705 17:35:28 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:01.705 17:35:28 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:01.705 17:35:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:01.705 17:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.705 17:35:28 -- common/autotest_common.sh@10 -- # set +x 00:19:02.275 nvme0n1 00:19:02.275 17:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.275 17:35:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.275 17:35:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:02.275 17:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.275 17:35:28 -- common/autotest_common.sh@10 -- # set +x 00:19:02.275 17:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.275 17:35:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.275 17:35:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.275 17:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.275 17:35:28 -- common/autotest_common.sh@10 -- # set +x 00:19:02.275 17:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.275 17:35:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:02.275 17:35:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:02.275 17:35:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:02.275 17:35:28 -- host/auth.sh@44 -- # digest=sha384 00:19:02.275 17:35:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:02.275 17:35:28 -- host/auth.sh@44 -- # keyid=2 00:19:02.275 17:35:28 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:02.275 17:35:28 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:02.275 17:35:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:02.275 
17:35:28 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:02.275 17:35:28 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:19:02.275 17:35:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:02.275 17:35:28 -- host/auth.sh@68 -- # digest=sha384 00:19:02.275 17:35:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:02.275 17:35:28 -- host/auth.sh@68 -- # keyid=2 00:19:02.275 17:35:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:02.275 17:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.275 17:35:28 -- common/autotest_common.sh@10 -- # set +x 00:19:02.275 17:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.275 17:35:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:02.275 17:35:28 -- nvmf/common.sh@717 -- # local ip 00:19:02.275 17:35:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:02.275 17:35:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:02.275 17:35:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.275 17:35:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.275 17:35:28 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:02.275 17:35:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:02.275 17:35:28 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:02.275 17:35:28 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:02.275 17:35:28 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:02.275 17:35:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:02.275 17:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.275 17:35:28 -- common/autotest_common.sh@10 -- # set +x 00:19:02.535 nvme0n1 00:19:02.535 17:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.535 17:35:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.535 17:35:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:02.535 17:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.535 17:35:28 -- common/autotest_common.sh@10 -- # set +x 00:19:02.535 17:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.535 17:35:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.535 17:35:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.535 17:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.535 17:35:28 -- common/autotest_common.sh@10 -- # set +x 00:19:02.535 17:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.535 17:35:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:02.535 17:35:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:02.535 17:35:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:02.535 17:35:28 -- host/auth.sh@44 -- # digest=sha384 00:19:02.535 17:35:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:02.535 17:35:28 -- host/auth.sh@44 -- # keyid=3 00:19:02.535 17:35:28 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:02.535 17:35:28 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:02.535 17:35:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:02.535 17:35:28 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 
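The stretch of trace above is one full pass of the test's inner loop: restrict the initiator to a single DH-HMAC-CHAP digest/dhgroup pair, program the matching key on the target, attach the controller over RDMA, confirm nvme0 shows up in bdev_nvme_get_controllers, then detach before the next keyid. Below is a minimal standalone sketch of that initiator-side cycle, assuming an SPDK build with scripts/rpc.py and jq available and the same 192.168.100.8:4420 RDMA target and NQNs that appear in this log; the in-tree test issues the same RPCs through its rpc_cmd helper, the digest list here simply mirrors the trace's "${digests[@]}" loop, and key0..key4 are assumed to have been registered earlier in the run (the DHHC-1:xx:... secrets echoed at host/auth.sh@45).

#!/usr/bin/env bash
# Hedged sketch of the per-iteration initiator flow this trace shows.
rpc=scripts/rpc.py   # assumption: standalone rpc.py in place of the test's rpc_cmd helper

for digest in sha256 sha384 sha512; do            # assumed digest list; this log shows sha384 and sha512
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3 4; do
      # Allow only the digest/dhgroup pair under test on the initiator side.
      "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Attach with the DH-HMAC-CHAP key for this keyid (key names assumed pre-loaded).
      "$rpc" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a 192.168.100.8 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid"

      # Authentication succeeded if the controller appears; then tear it down
      # before moving on to the next key, as the trace does.
      "$rpc" bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0
      "$rpc" bdev_nvme_detach_controller nvme0
    done
  done
done

The echo 'hmac(sha384)' / echo <dhgroup> / echo <DHHC-1 secret> triple at host/auth.sh@47-49 is the target-side half of each iteration; given the nvmet_auth_set_key function name it presumably rewrites the kernel nvmet host entry's DH-CHAP attributes for nqn.2024-02.io.spdk:host0, but the exact destinations are not visible in the trace, so they are left out of the sketch.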
00:19:02.535 17:35:28 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:19:02.535 17:35:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:02.535 17:35:28 -- host/auth.sh@68 -- # digest=sha384 00:19:02.535 17:35:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:02.535 17:35:28 -- host/auth.sh@68 -- # keyid=3 00:19:02.535 17:35:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:02.535 17:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.535 17:35:28 -- common/autotest_common.sh@10 -- # set +x 00:19:02.535 17:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.535 17:35:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:02.535 17:35:28 -- nvmf/common.sh@717 -- # local ip 00:19:02.535 17:35:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:02.535 17:35:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:02.535 17:35:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.535 17:35:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.535 17:35:28 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:02.535 17:35:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:02.535 17:35:28 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:02.535 17:35:28 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:02.535 17:35:28 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:02.535 17:35:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:02.535 17:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.535 17:35:29 -- common/autotest_common.sh@10 -- # set +x 00:19:02.793 nvme0n1 00:19:02.793 17:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:02.793 17:35:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.793 17:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:02.793 17:35:29 -- common/autotest_common.sh@10 -- # set +x 00:19:02.794 17:35:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:02.794 17:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.053 17:35:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.053 17:35:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.053 17:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.053 17:35:29 -- common/autotest_common.sh@10 -- # set +x 00:19:03.053 17:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.053 17:35:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:03.053 17:35:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:03.053 17:35:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:03.053 17:35:29 -- host/auth.sh@44 -- # digest=sha384 00:19:03.053 17:35:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:03.053 17:35:29 -- host/auth.sh@44 -- # keyid=4 00:19:03.053 17:35:29 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:03.053 17:35:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:03.053 17:35:29 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:03.053 17:35:29 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:03.053 17:35:29 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe4096 4 00:19:03.053 17:35:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:03.053 17:35:29 -- host/auth.sh@68 -- # digest=sha384 00:19:03.053 17:35:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:03.053 17:35:29 -- host/auth.sh@68 -- # keyid=4 00:19:03.053 17:35:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:03.053 17:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.053 17:35:29 -- common/autotest_common.sh@10 -- # set +x 00:19:03.053 17:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.053 17:35:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:03.053 17:35:29 -- nvmf/common.sh@717 -- # local ip 00:19:03.053 17:35:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:03.053 17:35:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:03.053 17:35:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.053 17:35:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.053 17:35:29 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:03.053 17:35:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:03.053 17:35:29 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:03.053 17:35:29 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:03.053 17:35:29 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:03.053 17:35:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:03.053 17:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.053 17:35:29 -- common/autotest_common.sh@10 -- # set +x 00:19:03.313 nvme0n1 00:19:03.313 17:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.313 17:35:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.313 17:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.313 17:35:29 -- common/autotest_common.sh@10 -- # set +x 00:19:03.313 17:35:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:03.313 17:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.313 17:35:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.313 17:35:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.313 17:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.313 17:35:29 -- common/autotest_common.sh@10 -- # set +x 00:19:03.313 17:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.313 17:35:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.313 17:35:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:03.313 17:35:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:03.313 17:35:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:03.313 17:35:29 -- host/auth.sh@44 -- # digest=sha384 00:19:03.313 17:35:29 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:03.313 17:35:29 -- host/auth.sh@44 -- # keyid=0 00:19:03.313 17:35:29 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:03.313 17:35:29 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:03.313 17:35:29 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:03.313 17:35:29 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:03.313 17:35:29 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:19:03.313 17:35:29 -- 
host/auth.sh@66 -- # local digest dhgroup keyid 00:19:03.313 17:35:29 -- host/auth.sh@68 -- # digest=sha384 00:19:03.313 17:35:29 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:03.313 17:35:29 -- host/auth.sh@68 -- # keyid=0 00:19:03.313 17:35:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:03.313 17:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.313 17:35:29 -- common/autotest_common.sh@10 -- # set +x 00:19:03.313 17:35:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.313 17:35:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:03.313 17:35:29 -- nvmf/common.sh@717 -- # local ip 00:19:03.313 17:35:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:03.313 17:35:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:03.313 17:35:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.313 17:35:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.313 17:35:29 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:03.313 17:35:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:03.313 17:35:29 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:03.313 17:35:29 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:03.313 17:35:29 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:03.313 17:35:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:03.313 17:35:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.313 17:35:29 -- common/autotest_common.sh@10 -- # set +x 00:19:03.882 nvme0n1 00:19:03.882 17:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.882 17:35:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.882 17:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.882 17:35:30 -- common/autotest_common.sh@10 -- # set +x 00:19:03.882 17:35:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:03.882 17:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.882 17:35:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.882 17:35:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.882 17:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.882 17:35:30 -- common/autotest_common.sh@10 -- # set +x 00:19:03.882 17:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.882 17:35:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:03.882 17:35:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:03.882 17:35:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:03.882 17:35:30 -- host/auth.sh@44 -- # digest=sha384 00:19:03.882 17:35:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:03.882 17:35:30 -- host/auth.sh@44 -- # keyid=1 00:19:03.882 17:35:30 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:03.882 17:35:30 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:03.882 17:35:30 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:03.882 17:35:30 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:03.882 17:35:30 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:19:03.882 17:35:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:03.882 17:35:30 -- host/auth.sh@68 -- # 
digest=sha384 00:19:03.882 17:35:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:03.882 17:35:30 -- host/auth.sh@68 -- # keyid=1 00:19:03.882 17:35:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:03.882 17:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.882 17:35:30 -- common/autotest_common.sh@10 -- # set +x 00:19:03.882 17:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:03.882 17:35:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:03.882 17:35:30 -- nvmf/common.sh@717 -- # local ip 00:19:03.882 17:35:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:03.882 17:35:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:03.882 17:35:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.882 17:35:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.882 17:35:30 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:03.882 17:35:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:03.882 17:35:30 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:03.882 17:35:30 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:03.882 17:35:30 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:03.882 17:35:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:03.882 17:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:03.882 17:35:30 -- common/autotest_common.sh@10 -- # set +x 00:19:04.448 nvme0n1 00:19:04.448 17:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.448 17:35:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.448 17:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.448 17:35:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:04.448 17:35:30 -- common/autotest_common.sh@10 -- # set +x 00:19:04.448 17:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.708 17:35:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.708 17:35:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.708 17:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.708 17:35:31 -- common/autotest_common.sh@10 -- # set +x 00:19:04.708 17:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.708 17:35:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:04.708 17:35:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:04.708 17:35:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:04.708 17:35:31 -- host/auth.sh@44 -- # digest=sha384 00:19:04.708 17:35:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:04.708 17:35:31 -- host/auth.sh@44 -- # keyid=2 00:19:04.708 17:35:31 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:04.708 17:35:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:04.708 17:35:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:04.708 17:35:31 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:04.708 17:35:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:19:04.708 17:35:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:04.708 17:35:31 -- host/auth.sh@68 -- # digest=sha384 00:19:04.708 17:35:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:04.708 17:35:31 -- host/auth.sh@68 -- # keyid=2 00:19:04.708 
17:35:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:04.708 17:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.708 17:35:31 -- common/autotest_common.sh@10 -- # set +x 00:19:04.708 17:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.708 17:35:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:04.708 17:35:31 -- nvmf/common.sh@717 -- # local ip 00:19:04.708 17:35:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:04.708 17:35:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:04.708 17:35:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.708 17:35:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.708 17:35:31 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:04.708 17:35:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:04.708 17:35:31 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:04.708 17:35:31 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:04.708 17:35:31 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:04.708 17:35:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:04.708 17:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.708 17:35:31 -- common/autotest_common.sh@10 -- # set +x 00:19:05.279 nvme0n1 00:19:05.279 17:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.279 17:35:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.279 17:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.279 17:35:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:05.279 17:35:31 -- common/autotest_common.sh@10 -- # set +x 00:19:05.279 17:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.279 17:35:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.279 17:35:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.279 17:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.279 17:35:31 -- common/autotest_common.sh@10 -- # set +x 00:19:05.279 17:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.279 17:35:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:05.279 17:35:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:05.279 17:35:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:05.279 17:35:31 -- host/auth.sh@44 -- # digest=sha384 00:19:05.279 17:35:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:05.279 17:35:31 -- host/auth.sh@44 -- # keyid=3 00:19:05.279 17:35:31 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:05.279 17:35:31 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:05.279 17:35:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:05.279 17:35:31 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:05.279 17:35:31 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:19:05.279 17:35:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:05.279 17:35:31 -- host/auth.sh@68 -- # digest=sha384 00:19:05.279 17:35:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:05.279 17:35:31 -- host/auth.sh@68 -- # keyid=3 00:19:05.279 17:35:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:19:05.279 17:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.279 17:35:31 -- common/autotest_common.sh@10 -- # set +x 00:19:05.279 17:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.279 17:35:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:05.279 17:35:31 -- nvmf/common.sh@717 -- # local ip 00:19:05.279 17:35:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:05.279 17:35:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:05.279 17:35:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.279 17:35:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.279 17:35:31 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:05.279 17:35:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:05.279 17:35:31 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:05.279 17:35:31 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:05.279 17:35:31 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:05.279 17:35:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:05.279 17:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.279 17:35:31 -- common/autotest_common.sh@10 -- # set +x 00:19:05.849 nvme0n1 00:19:05.849 17:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.849 17:35:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.849 17:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.849 17:35:32 -- common/autotest_common.sh@10 -- # set +x 00:19:05.849 17:35:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:05.849 17:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.849 17:35:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.849 17:35:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.849 17:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.849 17:35:32 -- common/autotest_common.sh@10 -- # set +x 00:19:05.849 17:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.849 17:35:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:05.849 17:35:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:05.849 17:35:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:05.849 17:35:32 -- host/auth.sh@44 -- # digest=sha384 00:19:05.849 17:35:32 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:05.849 17:35:32 -- host/auth.sh@44 -- # keyid=4 00:19:05.849 17:35:32 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:05.849 17:35:32 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:05.849 17:35:32 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:05.849 17:35:32 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:05.849 17:35:32 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:19:05.849 17:35:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:05.849 17:35:32 -- host/auth.sh@68 -- # digest=sha384 00:19:05.849 17:35:32 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:05.849 17:35:32 -- host/auth.sh@68 -- # keyid=4 00:19:05.849 17:35:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:05.849 17:35:32 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.849 17:35:32 -- common/autotest_common.sh@10 -- # set +x 00:19:05.849 17:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.849 17:35:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:05.849 17:35:32 -- nvmf/common.sh@717 -- # local ip 00:19:05.849 17:35:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:05.849 17:35:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:05.849 17:35:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.849 17:35:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.849 17:35:32 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:05.849 17:35:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:05.849 17:35:32 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:05.849 17:35:32 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:05.849 17:35:32 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:05.849 17:35:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:05.849 17:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.849 17:35:32 -- common/autotest_common.sh@10 -- # set +x 00:19:06.421 nvme0n1 00:19:06.421 17:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.421 17:35:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.421 17:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.421 17:35:32 -- common/autotest_common.sh@10 -- # set +x 00:19:06.421 17:35:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.421 17:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.421 17:35:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.421 17:35:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.421 17:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.421 17:35:32 -- common/autotest_common.sh@10 -- # set +x 00:19:06.421 17:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.421 17:35:32 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.421 17:35:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.421 17:35:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:06.421 17:35:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.421 17:35:32 -- host/auth.sh@44 -- # digest=sha384 00:19:06.421 17:35:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:06.421 17:35:32 -- host/auth.sh@44 -- # keyid=0 00:19:06.421 17:35:32 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:06.421 17:35:32 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.421 17:35:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:06.421 17:35:32 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:06.421 17:35:32 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:19:06.421 17:35:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.421 17:35:32 -- host/auth.sh@68 -- # digest=sha384 00:19:06.421 17:35:32 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:06.421 17:35:32 -- host/auth.sh@68 -- # keyid=0 00:19:06.421 17:35:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:06.421 17:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.421 
17:35:32 -- common/autotest_common.sh@10 -- # set +x 00:19:06.421 17:35:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.421 17:35:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.421 17:35:32 -- nvmf/common.sh@717 -- # local ip 00:19:06.421 17:35:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.421 17:35:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.421 17:35:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.421 17:35:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.421 17:35:32 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:06.421 17:35:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:06.421 17:35:32 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:06.421 17:35:32 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:06.421 17:35:32 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:06.421 17:35:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:06.421 17:35:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.421 17:35:32 -- common/autotest_common.sh@10 -- # set +x 00:19:07.797 nvme0n1 00:19:07.797 17:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.797 17:35:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.797 17:35:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:07.797 17:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.797 17:35:33 -- common/autotest_common.sh@10 -- # set +x 00:19:07.797 17:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.797 17:35:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.797 17:35:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.797 17:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.797 17:35:33 -- common/autotest_common.sh@10 -- # set +x 00:19:07.797 17:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.797 17:35:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:07.797 17:35:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:07.797 17:35:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:07.797 17:35:33 -- host/auth.sh@44 -- # digest=sha384 00:19:07.797 17:35:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:07.797 17:35:33 -- host/auth.sh@44 -- # keyid=1 00:19:07.797 17:35:33 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:07.797 17:35:33 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:07.797 17:35:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:07.797 17:35:33 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:07.797 17:35:33 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:19:07.797 17:35:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:07.797 17:35:33 -- host/auth.sh@68 -- # digest=sha384 00:19:07.797 17:35:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:07.797 17:35:33 -- host/auth.sh@68 -- # keyid=1 00:19:07.797 17:35:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:07.797 17:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.797 17:35:33 -- common/autotest_common.sh@10 -- # set +x 00:19:07.797 17:35:34 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.797 17:35:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:07.797 17:35:34 -- nvmf/common.sh@717 -- # local ip 00:19:07.797 17:35:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.797 17:35:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.797 17:35:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.797 17:35:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.797 17:35:34 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:07.797 17:35:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:07.797 17:35:34 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:07.797 17:35:34 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:07.797 17:35:34 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:07.797 17:35:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:07.797 17:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.797 17:35:34 -- common/autotest_common.sh@10 -- # set +x 00:19:08.737 nvme0n1 00:19:08.737 17:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.737 17:35:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.737 17:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.737 17:35:35 -- common/autotest_common.sh@10 -- # set +x 00:19:08.737 17:35:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:08.737 17:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.737 17:35:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.737 17:35:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.737 17:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.737 17:35:35 -- common/autotest_common.sh@10 -- # set +x 00:19:08.737 17:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.737 17:35:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:08.737 17:35:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:08.737 17:35:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:08.737 17:35:35 -- host/auth.sh@44 -- # digest=sha384 00:19:08.737 17:35:35 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:08.737 17:35:35 -- host/auth.sh@44 -- # keyid=2 00:19:08.737 17:35:35 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:08.737 17:35:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:08.737 17:35:35 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:08.737 17:35:35 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:08.737 17:35:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:19:08.737 17:35:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:08.738 17:35:35 -- host/auth.sh@68 -- # digest=sha384 00:19:08.738 17:35:35 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:08.738 17:35:35 -- host/auth.sh@68 -- # keyid=2 00:19:08.738 17:35:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:08.738 17:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.738 17:35:35 -- common/autotest_common.sh@10 -- # set +x 00:19:08.738 17:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.738 17:35:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:08.738 17:35:35 -- 
nvmf/common.sh@717 -- # local ip 00:19:08.738 17:35:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:08.738 17:35:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:08.738 17:35:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.738 17:35:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.738 17:35:35 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:08.738 17:35:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:08.738 17:35:35 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:08.738 17:35:35 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:08.738 17:35:35 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:08.738 17:35:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:08.738 17:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.738 17:35:35 -- common/autotest_common.sh@10 -- # set +x 00:19:09.677 nvme0n1 00:19:09.677 17:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.677 17:35:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.677 17:35:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:09.677 17:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.677 17:35:36 -- common/autotest_common.sh@10 -- # set +x 00:19:09.677 17:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.677 17:35:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.677 17:35:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.677 17:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.677 17:35:36 -- common/autotest_common.sh@10 -- # set +x 00:19:09.677 17:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.677 17:35:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:09.677 17:35:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:09.677 17:35:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:09.677 17:35:36 -- host/auth.sh@44 -- # digest=sha384 00:19:09.677 17:35:36 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:09.677 17:35:36 -- host/auth.sh@44 -- # keyid=3 00:19:09.677 17:35:36 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:09.677 17:35:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:09.677 17:35:36 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:09.677 17:35:36 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:09.677 17:35:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:19:09.677 17:35:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:09.677 17:35:36 -- host/auth.sh@68 -- # digest=sha384 00:19:09.677 17:35:36 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:09.677 17:35:36 -- host/auth.sh@68 -- # keyid=3 00:19:09.677 17:35:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:09.677 17:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.677 17:35:36 -- common/autotest_common.sh@10 -- # set +x 00:19:09.677 17:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.677 17:35:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:09.677 17:35:36 -- nvmf/common.sh@717 -- # local ip 00:19:09.677 17:35:36 -- nvmf/common.sh@718 -- # 
ip_candidates=() 00:19:09.677 17:35:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:09.677 17:35:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.677 17:35:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.677 17:35:36 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:09.677 17:35:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:09.677 17:35:36 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:09.677 17:35:36 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:09.677 17:35:36 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:09.677 17:35:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:09.677 17:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.677 17:35:36 -- common/autotest_common.sh@10 -- # set +x 00:19:10.687 nvme0n1 00:19:10.687 17:35:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.687 17:35:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.687 17:35:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.687 17:35:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.687 17:35:37 -- common/autotest_common.sh@10 -- # set +x 00:19:10.946 17:35:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.946 17:35:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.946 17:35:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.946 17:35:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.946 17:35:37 -- common/autotest_common.sh@10 -- # set +x 00:19:10.946 17:35:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.946 17:35:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.946 17:35:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:10.946 17:35:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.946 17:35:37 -- host/auth.sh@44 -- # digest=sha384 00:19:10.946 17:35:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:10.946 17:35:37 -- host/auth.sh@44 -- # keyid=4 00:19:10.946 17:35:37 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:10.946 17:35:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:10.946 17:35:37 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:10.946 17:35:37 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:10.946 17:35:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:19:10.946 17:35:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.946 17:35:37 -- host/auth.sh@68 -- # digest=sha384 00:19:10.946 17:35:37 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:10.946 17:35:37 -- host/auth.sh@68 -- # keyid=4 00:19:10.946 17:35:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:10.946 17:35:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.946 17:35:37 -- common/autotest_common.sh@10 -- # set +x 00:19:10.946 17:35:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.946 17:35:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.946 17:35:37 -- nvmf/common.sh@717 -- # local ip 00:19:10.946 17:35:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.946 17:35:37 -- 
nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.946 17:35:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.946 17:35:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.946 17:35:37 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:10.946 17:35:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:10.946 17:35:37 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:10.946 17:35:37 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:10.946 17:35:37 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:10.946 17:35:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:10.946 17:35:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.946 17:35:37 -- common/autotest_common.sh@10 -- # set +x 00:19:11.882 nvme0n1 00:19:11.883 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.883 17:35:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.883 17:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.883 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:19:11.883 17:35:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:11.883 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.883 17:35:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.883 17:35:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.883 17:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.883 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:19:11.883 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.883 17:35:38 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:11.883 17:35:38 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.883 17:35:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.883 17:35:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:11.883 17:35:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.883 17:35:38 -- host/auth.sh@44 -- # digest=sha512 00:19:11.883 17:35:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:11.883 17:35:38 -- host/auth.sh@44 -- # keyid=0 00:19:11.883 17:35:38 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:11.883 17:35:38 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:11.883 17:35:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:11.883 17:35:38 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:11.883 17:35:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:19:11.883 17:35:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.883 17:35:38 -- host/auth.sh@68 -- # digest=sha512 00:19:11.883 17:35:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:11.883 17:35:38 -- host/auth.sh@68 -- # keyid=0 00:19:11.883 17:35:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.883 17:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.883 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:19:11.883 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.883 17:35:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:11.883 17:35:38 -- nvmf/common.sh@717 -- # local ip 00:19:11.883 17:35:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.883 
17:35:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.883 17:35:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.883 17:35:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.883 17:35:38 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:11.883 17:35:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:11.883 17:35:38 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:11.883 17:35:38 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:11.883 17:35:38 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:11.883 17:35:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:11.883 17:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.883 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.143 nvme0n1 00:19:12.143 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.143 17:35:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.143 17:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.143 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.143 17:35:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:12.143 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.143 17:35:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.143 17:35:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.143 17:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.143 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.143 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.143 17:35:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:12.143 17:35:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:12.143 17:35:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:12.143 17:35:38 -- host/auth.sh@44 -- # digest=sha512 00:19:12.143 17:35:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.143 17:35:38 -- host/auth.sh@44 -- # keyid=1 00:19:12.143 17:35:38 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:12.143 17:35:38 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:12.143 17:35:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:12.143 17:35:38 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:12.143 17:35:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:19:12.143 17:35:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:12.143 17:35:38 -- host/auth.sh@68 -- # digest=sha512 00:19:12.143 17:35:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:12.143 17:35:38 -- host/auth.sh@68 -- # keyid=1 00:19:12.143 17:35:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.143 17:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.143 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.143 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.143 17:35:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:12.143 17:35:38 -- nvmf/common.sh@717 -- # local ip 00:19:12.143 17:35:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:12.143 17:35:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:12.143 17:35:38 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.143 17:35:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.143 17:35:38 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:12.143 17:35:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:12.143 17:35:38 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:12.143 17:35:38 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:12.143 17:35:38 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:12.143 17:35:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:12.143 17:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.143 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.401 nvme0n1 00:19:12.401 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.401 17:35:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.401 17:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.401 17:35:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:12.401 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.401 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.401 17:35:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.401 17:35:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.401 17:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.401 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.401 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.401 17:35:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:12.402 17:35:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:12.402 17:35:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:12.402 17:35:38 -- host/auth.sh@44 -- # digest=sha512 00:19:12.402 17:35:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.402 17:35:38 -- host/auth.sh@44 -- # keyid=2 00:19:12.402 17:35:38 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:12.402 17:35:38 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:12.402 17:35:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:12.402 17:35:38 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:12.402 17:35:38 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:19:12.402 17:35:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:12.402 17:35:38 -- host/auth.sh@68 -- # digest=sha512 00:19:12.402 17:35:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:12.402 17:35:38 -- host/auth.sh@68 -- # keyid=2 00:19:12.402 17:35:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.402 17:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.402 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.402 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.402 17:35:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:12.402 17:35:38 -- nvmf/common.sh@717 -- # local ip 00:19:12.402 17:35:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:12.402 17:35:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:12.402 17:35:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.402 17:35:38 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.402 17:35:38 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:12.402 17:35:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:12.402 17:35:38 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:12.402 17:35:38 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:12.402 17:35:38 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:12.402 17:35:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:12.402 17:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.402 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.676 nvme0n1 00:19:12.676 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.676 17:35:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.676 17:35:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.676 17:35:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:12.676 17:35:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.676 17:35:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.676 17:35:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.676 17:35:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.676 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.676 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:12.676 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.676 17:35:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:12.676 17:35:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:12.676 17:35:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:12.676 17:35:39 -- host/auth.sh@44 -- # digest=sha512 00:19:12.676 17:35:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.676 17:35:39 -- host/auth.sh@44 -- # keyid=3 00:19:12.676 17:35:39 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:12.676 17:35:39 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:12.676 17:35:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:12.676 17:35:39 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:12.676 17:35:39 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:19:12.676 17:35:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:12.676 17:35:39 -- host/auth.sh@68 -- # digest=sha512 00:19:12.676 17:35:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:12.676 17:35:39 -- host/auth.sh@68 -- # keyid=3 00:19:12.676 17:35:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.676 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.676 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:12.676 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.676 17:35:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:12.676 17:35:39 -- nvmf/common.sh@717 -- # local ip 00:19:12.676 17:35:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:12.676 17:35:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:12.676 17:35:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.676 17:35:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.676 17:35:39 -- nvmf/common.sh@723 -- # [[ -z 
rdma ]] 00:19:12.676 17:35:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:12.676 17:35:39 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:12.676 17:35:39 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:12.676 17:35:39 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:12.676 17:35:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:12.676 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.676 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:12.935 nvme0n1 00:19:12.935 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.935 17:35:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.935 17:35:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:12.935 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.935 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:12.935 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.935 17:35:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.935 17:35:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.935 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.935 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:12.935 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.935 17:35:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:12.935 17:35:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:12.935 17:35:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:12.935 17:35:39 -- host/auth.sh@44 -- # digest=sha512 00:19:12.935 17:35:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.935 17:35:39 -- host/auth.sh@44 -- # keyid=4 00:19:12.935 17:35:39 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:12.935 17:35:39 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:12.935 17:35:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:12.935 17:35:39 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:12.935 17:35:39 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:19:12.935 17:35:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:12.935 17:35:39 -- host/auth.sh@68 -- # digest=sha512 00:19:12.935 17:35:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:12.935 17:35:39 -- host/auth.sh@68 -- # keyid=4 00:19:12.935 17:35:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.935 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.935 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:12.935 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.935 17:35:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:12.935 17:35:39 -- nvmf/common.sh@717 -- # local ip 00:19:12.935 17:35:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:12.935 17:35:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:12.935 17:35:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.935 17:35:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.935 17:35:39 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:12.935 17:35:39 -- nvmf/common.sh@723 -- # 
[[ -z NVMF_FIRST_TARGET_IP ]] 00:19:12.935 17:35:39 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:12.935 17:35:39 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:12.935 17:35:39 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:12.935 17:35:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:12.935 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.935 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 nvme0n1 00:19:13.195 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.195 17:35:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.195 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.195 17:35:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:13.195 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.195 17:35:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.195 17:35:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.195 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.195 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.195 17:35:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.195 17:35:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:13.195 17:35:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:13.195 17:35:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:13.195 17:35:39 -- host/auth.sh@44 -- # digest=sha512 00:19:13.195 17:35:39 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:13.195 17:35:39 -- host/auth.sh@44 -- # keyid=0 00:19:13.195 17:35:39 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:13.195 17:35:39 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:13.195 17:35:39 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:13.195 17:35:39 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:13.195 17:35:39 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:19:13.195 17:35:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:13.195 17:35:39 -- host/auth.sh@68 -- # digest=sha512 00:19:13.195 17:35:39 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:13.195 17:35:39 -- host/auth.sh@68 -- # keyid=0 00:19:13.195 17:35:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.195 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.195 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.195 17:35:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:13.195 17:35:39 -- nvmf/common.sh@717 -- # local ip 00:19:13.195 17:35:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:13.195 17:35:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:13.195 17:35:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.195 17:35:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.195 17:35:39 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:13.195 17:35:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:13.195 17:35:39 -- 
nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:13.196 17:35:39 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:13.196 17:35:39 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:13.196 17:35:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:13.196 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.196 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:13.456 nvme0n1 00:19:13.456 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.456 17:35:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.456 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.456 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:13.456 17:35:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:13.456 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.456 17:35:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.456 17:35:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.456 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.456 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:13.456 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.456 17:35:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:13.456 17:35:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:13.456 17:35:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:13.456 17:35:39 -- host/auth.sh@44 -- # digest=sha512 00:19:13.456 17:35:39 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:13.456 17:35:39 -- host/auth.sh@44 -- # keyid=1 00:19:13.456 17:35:39 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:13.456 17:35:39 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:13.456 17:35:39 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:13.456 17:35:39 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:13.456 17:35:39 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:19:13.456 17:35:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:13.456 17:35:39 -- host/auth.sh@68 -- # digest=sha512 00:19:13.456 17:35:39 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:13.456 17:35:39 -- host/auth.sh@68 -- # keyid=1 00:19:13.456 17:35:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.456 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.456 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:13.456 17:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.456 17:35:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:13.456 17:35:39 -- nvmf/common.sh@717 -- # local ip 00:19:13.456 17:35:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:13.456 17:35:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:13.456 17:35:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.456 17:35:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.456 17:35:39 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:13.456 17:35:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:13.456 17:35:39 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:13.456 17:35:39 -- nvmf/common.sh@726 -- # 
[[ -z 192.168.100.8 ]] 00:19:13.456 17:35:39 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:13.456 17:35:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:13.456 17:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.456 17:35:39 -- common/autotest_common.sh@10 -- # set +x 00:19:13.716 nvme0n1 00:19:13.716 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.716 17:35:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.716 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.716 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:13.716 17:35:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:13.716 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.716 17:35:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.716 17:35:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.716 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.716 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:13.716 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.716 17:35:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:13.716 17:35:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:13.716 17:35:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:13.716 17:35:40 -- host/auth.sh@44 -- # digest=sha512 00:19:13.716 17:35:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:13.716 17:35:40 -- host/auth.sh@44 -- # keyid=2 00:19:13.716 17:35:40 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:13.716 17:35:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:13.716 17:35:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:13.716 17:35:40 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:13.716 17:35:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:19:13.716 17:35:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:13.716 17:35:40 -- host/auth.sh@68 -- # digest=sha512 00:19:13.716 17:35:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:13.716 17:35:40 -- host/auth.sh@68 -- # keyid=2 00:19:13.716 17:35:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.716 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.716 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:13.716 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.716 17:35:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:13.716 17:35:40 -- nvmf/common.sh@717 -- # local ip 00:19:13.716 17:35:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:13.716 17:35:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:13.716 17:35:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.716 17:35:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.716 17:35:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:13.716 17:35:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:13.716 17:35:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:13.716 17:35:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:13.716 17:35:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:13.716 17:35:40 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:13.716 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.716 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:13.975 nvme0n1 00:19:13.975 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.975 17:35:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.975 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.975 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:13.975 17:35:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:13.975 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.975 17:35:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.975 17:35:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.975 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.975 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:13.975 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.975 17:35:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:13.975 17:35:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:13.975 17:35:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:13.975 17:35:40 -- host/auth.sh@44 -- # digest=sha512 00:19:13.975 17:35:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:13.975 17:35:40 -- host/auth.sh@44 -- # keyid=3 00:19:13.975 17:35:40 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:13.975 17:35:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:13.975 17:35:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:13.975 17:35:40 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:13.975 17:35:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:19:13.975 17:35:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:13.976 17:35:40 -- host/auth.sh@68 -- # digest=sha512 00:19:13.976 17:35:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:13.976 17:35:40 -- host/auth.sh@68 -- # keyid=3 00:19:13.976 17:35:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.976 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.976 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:13.976 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.976 17:35:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:13.976 17:35:40 -- nvmf/common.sh@717 -- # local ip 00:19:13.976 17:35:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:13.976 17:35:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:13.976 17:35:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.976 17:35:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.976 17:35:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:13.976 17:35:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:13.976 17:35:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:13.976 17:35:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:13.976 17:35:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:13.976 17:35:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:13.976 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.976 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:14.237 nvme0n1 00:19:14.237 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.237 17:35:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.237 17:35:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:14.237 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.237 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:14.237 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.237 17:35:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.237 17:35:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.237 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.237 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:14.237 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.237 17:35:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:14.237 17:35:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:14.237 17:35:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:14.237 17:35:40 -- host/auth.sh@44 -- # digest=sha512 00:19:14.237 17:35:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.237 17:35:40 -- host/auth.sh@44 -- # keyid=4 00:19:14.237 17:35:40 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:14.237 17:35:40 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:14.237 17:35:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:14.237 17:35:40 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:14.237 17:35:40 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:19:14.237 17:35:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:14.237 17:35:40 -- host/auth.sh@68 -- # digest=sha512 00:19:14.237 17:35:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:14.237 17:35:40 -- host/auth.sh@68 -- # keyid=4 00:19:14.237 17:35:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.237 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.237 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:14.237 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.237 17:35:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:14.237 17:35:40 -- nvmf/common.sh@717 -- # local ip 00:19:14.237 17:35:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:14.237 17:35:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:14.237 17:35:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.237 17:35:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.237 17:35:40 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:14.237 17:35:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:14.237 17:35:40 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:14.237 17:35:40 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:14.237 17:35:40 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:14.237 17:35:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:19:14.237 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.237 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:14.497 nvme0n1 00:19:14.497 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.497 17:35:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.497 17:35:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:14.497 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.497 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:14.497 17:35:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.497 17:35:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.497 17:35:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.497 17:35:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.497 17:35:40 -- common/autotest_common.sh@10 -- # set +x 00:19:14.497 17:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.497 17:35:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.498 17:35:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:14.498 17:35:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:14.498 17:35:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:14.498 17:35:41 -- host/auth.sh@44 -- # digest=sha512 00:19:14.498 17:35:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:14.498 17:35:41 -- host/auth.sh@44 -- # keyid=0 00:19:14.498 17:35:41 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:14.498 17:35:41 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:14.498 17:35:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:14.498 17:35:41 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:14.498 17:35:41 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:19:14.498 17:35:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:14.498 17:35:41 -- host/auth.sh@68 -- # digest=sha512 00:19:14.498 17:35:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:14.498 17:35:41 -- host/auth.sh@68 -- # keyid=0 00:19:14.498 17:35:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.498 17:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.498 17:35:41 -- common/autotest_common.sh@10 -- # set +x 00:19:14.498 17:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.498 17:35:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:14.498 17:35:41 -- nvmf/common.sh@717 -- # local ip 00:19:14.498 17:35:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:14.498 17:35:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:14.498 17:35:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.498 17:35:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.498 17:35:41 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:14.498 17:35:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:14.498 17:35:41 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:14.498 17:35:41 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:14.498 17:35:41 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:14.498 17:35:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:14.498 17:35:41 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.498 17:35:41 -- common/autotest_common.sh@10 -- # set +x 00:19:15.067 nvme0n1 00:19:15.067 17:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.067 17:35:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.067 17:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.067 17:35:41 -- common/autotest_common.sh@10 -- # set +x 00:19:15.067 17:35:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:15.067 17:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.067 17:35:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.067 17:35:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.067 17:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.067 17:35:41 -- common/autotest_common.sh@10 -- # set +x 00:19:15.067 17:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.067 17:35:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:15.067 17:35:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:15.067 17:35:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:15.067 17:35:41 -- host/auth.sh@44 -- # digest=sha512 00:19:15.067 17:35:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:15.067 17:35:41 -- host/auth.sh@44 -- # keyid=1 00:19:15.067 17:35:41 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:15.067 17:35:41 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:15.067 17:35:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:15.067 17:35:41 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:15.067 17:35:41 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:19:15.067 17:35:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:15.067 17:35:41 -- host/auth.sh@68 -- # digest=sha512 00:19:15.067 17:35:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:15.067 17:35:41 -- host/auth.sh@68 -- # keyid=1 00:19:15.067 17:35:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.067 17:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.067 17:35:41 -- common/autotest_common.sh@10 -- # set +x 00:19:15.067 17:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.067 17:35:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:15.067 17:35:41 -- nvmf/common.sh@717 -- # local ip 00:19:15.067 17:35:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:15.067 17:35:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:15.067 17:35:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.067 17:35:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.067 17:35:41 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:15.067 17:35:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:15.067 17:35:41 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:15.067 17:35:41 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:15.067 17:35:41 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:15.067 17:35:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:15.067 17:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.067 17:35:41 -- 
common/autotest_common.sh@10 -- # set +x 00:19:15.327 nvme0n1 00:19:15.327 17:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.327 17:35:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.327 17:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.327 17:35:41 -- common/autotest_common.sh@10 -- # set +x 00:19:15.327 17:35:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:15.327 17:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.327 17:35:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.327 17:35:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.327 17:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.327 17:35:41 -- common/autotest_common.sh@10 -- # set +x 00:19:15.327 17:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.327 17:35:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:15.327 17:35:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:15.327 17:35:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:15.327 17:35:41 -- host/auth.sh@44 -- # digest=sha512 00:19:15.327 17:35:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:15.327 17:35:41 -- host/auth.sh@44 -- # keyid=2 00:19:15.327 17:35:41 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:15.327 17:35:41 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:15.327 17:35:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:15.327 17:35:41 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:15.327 17:35:41 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:19:15.327 17:35:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:15.327 17:35:41 -- host/auth.sh@68 -- # digest=sha512 00:19:15.327 17:35:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:15.327 17:35:41 -- host/auth.sh@68 -- # keyid=2 00:19:15.327 17:35:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.327 17:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.327 17:35:41 -- common/autotest_common.sh@10 -- # set +x 00:19:15.327 17:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.327 17:35:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:15.327 17:35:41 -- nvmf/common.sh@717 -- # local ip 00:19:15.327 17:35:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:15.327 17:35:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:15.327 17:35:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.327 17:35:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.327 17:35:41 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:15.327 17:35:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:15.327 17:35:41 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:15.327 17:35:41 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:15.327 17:35:41 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:15.327 17:35:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:15.327 17:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.327 17:35:41 -- common/autotest_common.sh@10 -- # set +x 00:19:15.585 nvme0n1 00:19:15.585 17:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
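For readers skimming the trace: every passing iteration above repeats the same initiator-side RPC sequence, namely restrict the allowed DH-HMAC-CHAP digest and DH group, attach a controller with one of the pre-provisioned keys, confirm that bdev_nvme_get_controllers reports nvme0, then detach before the next combination. The sketch below reproduces one such cycle outside the test harness. The RPC names and flags are taken verbatim from the trace; invoking rpc.py directly (and its path) is an assumption, since the suite goes through its own rpc_cmd wrapper.

#!/usr/bin/env bash
# Minimal sketch of one connect/verify cycle from the trace above.
# Assumptions: SPDK's rpc.py is reachable under $SPDK_DIR/scripts/, and the target
# at 192.168.100.8:4420 already holds the matching DH-HMAC-CHAP key for this host.
set -euo pipefail

rpc="${SPDK_DIR:-./spdk}/scripts/rpc.py"

digest=sha512
dhgroup=ffdhe2048
keyname=key0    # one of the key0..key4 names used throughout the trace

# Limit the initiator to the digest/DH group under test (host/auth.sh@69 in the trace).
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with DH-HMAC-CHAP authentication (host/auth.sh@70).
"$rpc" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
  -a 192.168.100.8 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key "$keyname"

# Verify the controller came up, mirroring the '[[ nvme0 == nvme0 ]]' check at host/auth.sh@73.
name=$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == nvme0 ]] || { echo "auth failed for $digest/$dhgroup/$keyname" >&2; exit 1; }

# Clean up before the next digest/dhgroup/key combination (host/auth.sh@74).
"$rpc" bdev_nvme_detach_controller nvme0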
00:19:15.585 17:35:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.585 17:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.585 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:19:15.585 17:35:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:15.585 17:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.844 17:35:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.844 17:35:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.844 17:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.844 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:19:15.844 17:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.844 17:35:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:15.844 17:35:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:15.844 17:35:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:15.844 17:35:42 -- host/auth.sh@44 -- # digest=sha512 00:19:15.844 17:35:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:15.844 17:35:42 -- host/auth.sh@44 -- # keyid=3 00:19:15.844 17:35:42 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:15.844 17:35:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:15.844 17:35:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:15.844 17:35:42 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:15.844 17:35:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:19:15.844 17:35:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:15.844 17:35:42 -- host/auth.sh@68 -- # digest=sha512 00:19:15.844 17:35:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:15.844 17:35:42 -- host/auth.sh@68 -- # keyid=3 00:19:15.844 17:35:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.844 17:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.844 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:19:15.844 17:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.844 17:35:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:15.844 17:35:42 -- nvmf/common.sh@717 -- # local ip 00:19:15.844 17:35:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:15.844 17:35:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:15.844 17:35:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.844 17:35:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.844 17:35:42 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:15.844 17:35:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:15.844 17:35:42 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:15.844 17:35:42 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:15.844 17:35:42 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:15.844 17:35:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:15.844 17:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.844 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:19:16.104 nvme0n1 00:19:16.104 17:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.104 17:35:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.104 
17:35:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:16.104 17:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.104 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:19:16.104 17:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.104 17:35:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.104 17:35:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.104 17:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.104 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:19:16.104 17:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.104 17:35:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:16.104 17:35:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:16.104 17:35:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:16.104 17:35:42 -- host/auth.sh@44 -- # digest=sha512 00:19:16.104 17:35:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:16.104 17:35:42 -- host/auth.sh@44 -- # keyid=4 00:19:16.104 17:35:42 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:16.104 17:35:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:16.104 17:35:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:16.104 17:35:42 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:16.104 17:35:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:19:16.104 17:35:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:16.104 17:35:42 -- host/auth.sh@68 -- # digest=sha512 00:19:16.104 17:35:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:16.104 17:35:42 -- host/auth.sh@68 -- # keyid=4 00:19:16.104 17:35:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:16.104 17:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.104 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:19:16.104 17:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.104 17:35:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:16.104 17:35:42 -- nvmf/common.sh@717 -- # local ip 00:19:16.104 17:35:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:16.104 17:35:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:16.104 17:35:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.104 17:35:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.104 17:35:42 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:16.104 17:35:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:16.104 17:35:42 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:16.104 17:35:42 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:16.104 17:35:42 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:16.104 17:35:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:16.104 17:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.104 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:19:16.362 nvme0n1 00:19:16.362 17:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.362 17:35:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.362 17:35:42 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:19:16.362 17:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.362 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:19:16.362 17:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.622 17:35:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.622 17:35:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.622 17:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.622 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:19:16.622 17:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.622 17:35:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.622 17:35:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:16.622 17:35:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:16.622 17:35:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:16.622 17:35:42 -- host/auth.sh@44 -- # digest=sha512 00:19:16.622 17:35:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:16.622 17:35:42 -- host/auth.sh@44 -- # keyid=0 00:19:16.622 17:35:42 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:16.622 17:35:42 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:16.622 17:35:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:16.622 17:35:42 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:16.622 17:35:42 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:19:16.622 17:35:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:16.622 17:35:42 -- host/auth.sh@68 -- # digest=sha512 00:19:16.622 17:35:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:16.622 17:35:42 -- host/auth.sh@68 -- # keyid=0 00:19:16.622 17:35:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:16.622 17:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.622 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:19:16.622 17:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.622 17:35:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:16.622 17:35:42 -- nvmf/common.sh@717 -- # local ip 00:19:16.622 17:35:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:16.622 17:35:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:16.622 17:35:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.622 17:35:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.622 17:35:42 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:16.622 17:35:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:16.622 17:35:42 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:16.622 17:35:42 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:16.622 17:35:42 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:16.622 17:35:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:16.622 17:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.622 17:35:42 -- common/autotest_common.sh@10 -- # set +x 00:19:17.190 nvme0n1 00:19:17.190 17:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.190 17:35:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.190 17:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.190 17:35:43 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:19:17.190 17:35:43 -- common/autotest_common.sh@10 -- # set +x 00:19:17.190 17:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.190 17:35:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.190 17:35:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.190 17:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.190 17:35:43 -- common/autotest_common.sh@10 -- # set +x 00:19:17.190 17:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.190 17:35:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:17.190 17:35:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:17.190 17:35:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:17.190 17:35:43 -- host/auth.sh@44 -- # digest=sha512 00:19:17.190 17:35:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:17.190 17:35:43 -- host/auth.sh@44 -- # keyid=1 00:19:17.190 17:35:43 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:17.190 17:35:43 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:17.190 17:35:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:17.190 17:35:43 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:17.190 17:35:43 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:19:17.190 17:35:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:17.190 17:35:43 -- host/auth.sh@68 -- # digest=sha512 00:19:17.190 17:35:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:17.190 17:35:43 -- host/auth.sh@68 -- # keyid=1 00:19:17.190 17:35:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:17.190 17:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.190 17:35:43 -- common/autotest_common.sh@10 -- # set +x 00:19:17.190 17:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.190 17:35:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:17.190 17:35:43 -- nvmf/common.sh@717 -- # local ip 00:19:17.190 17:35:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.190 17:35:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.190 17:35:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.190 17:35:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.190 17:35:43 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:17.190 17:35:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:17.190 17:35:43 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:17.190 17:35:43 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:17.190 17:35:43 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:17.190 17:35:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:17.190 17:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.190 17:35:43 -- common/autotest_common.sh@10 -- # set +x 00:19:17.760 nvme0n1 00:19:17.760 17:35:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.760 17:35:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.760 17:35:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:17.760 17:35:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.760 17:35:44 -- common/autotest_common.sh@10 -- # set +x 00:19:17.760 
17:35:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.760 17:35:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.760 17:35:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.760 17:35:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.760 17:35:44 -- common/autotest_common.sh@10 -- # set +x 00:19:17.760 17:35:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.760 17:35:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:17.760 17:35:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:17.760 17:35:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:17.760 17:35:44 -- host/auth.sh@44 -- # digest=sha512 00:19:17.760 17:35:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:17.760 17:35:44 -- host/auth.sh@44 -- # keyid=2 00:19:17.760 17:35:44 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:17.760 17:35:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:17.760 17:35:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:17.760 17:35:44 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:17.760 17:35:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:19:17.760 17:35:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:17.760 17:35:44 -- host/auth.sh@68 -- # digest=sha512 00:19:17.760 17:35:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:17.760 17:35:44 -- host/auth.sh@68 -- # keyid=2 00:19:17.760 17:35:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:17.760 17:35:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.760 17:35:44 -- common/autotest_common.sh@10 -- # set +x 00:19:17.760 17:35:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.760 17:35:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:17.760 17:35:44 -- nvmf/common.sh@717 -- # local ip 00:19:17.760 17:35:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:17.760 17:35:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:17.760 17:35:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.760 17:35:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.760 17:35:44 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:17.760 17:35:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:17.760 17:35:44 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:17.760 17:35:44 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:17.760 17:35:44 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:17.760 17:35:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:17.760 17:35:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.760 17:35:44 -- common/autotest_common.sh@10 -- # set +x 00:19:18.328 nvme0n1 00:19:18.328 17:35:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.328 17:35:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.328 17:35:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.328 17:35:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.328 17:35:44 -- common/autotest_common.sh@10 -- # set +x 00:19:18.328 17:35:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.328 17:35:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.328 
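The echo 'hmac(sha512)' / echo ffdhe2048 / echo DHHC-1:... triplets at host/auth.sh@47-49 are the target-side half of each iteration: nvmet_auth_set_key publishes the digest, DH group and DH-HMAC-CHAP secret for the host entry before the initiator reconnects. The trace only shows the values being echoed, not where they are written, so the configfs paths in the sketch below are an assumption about a kernel nvmet target rather than something taken from this log.

# Hedged sketch of the target-side step (nvmet_auth_set_key) for one iteration.
# Assumption: a Linux kernel nvmet target whose per-host configfs entry exposes
# dhchap_hash / dhchap_dhgroup / dhchap_key attributes; the exact paths are not
# visible in this excerpt and may differ from what host/auth.sh actually writes.
set -euo pipefail

hostnqn=nqn.2024-02.io.spdk:host0
host_dir="/sys/kernel/config/nvmet/hosts/$hostnqn"   # assumed location

echo 'hmac(sha512)' > "$host_dir/dhchap_hash"     # digest under test
echo 'ffdhe2048'    > "$host_dir/dhchap_dhgroup"  # DH group under test
echo 'DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7:' > "$host_dir/dhchap_key"  # keyid 0 secret from the trace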
17:35:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.328 17:35:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.328 17:35:44 -- common/autotest_common.sh@10 -- # set +x 00:19:18.328 17:35:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.328 17:35:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.328 17:35:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:18.328 17:35:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.328 17:35:44 -- host/auth.sh@44 -- # digest=sha512 00:19:18.328 17:35:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:18.328 17:35:44 -- host/auth.sh@44 -- # keyid=3 00:19:18.328 17:35:44 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:18.328 17:35:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.328 17:35:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:18.328 17:35:44 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:18.328 17:35:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:19:18.328 17:35:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.328 17:35:44 -- host/auth.sh@68 -- # digest=sha512 00:19:18.328 17:35:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:18.328 17:35:44 -- host/auth.sh@68 -- # keyid=3 00:19:18.328 17:35:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:18.328 17:35:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.328 17:35:44 -- common/autotest_common.sh@10 -- # set +x 00:19:18.328 17:35:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.328 17:35:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.328 17:35:44 -- nvmf/common.sh@717 -- # local ip 00:19:18.328 17:35:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.328 17:35:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.328 17:35:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.328 17:35:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.328 17:35:44 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:18.328 17:35:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:18.328 17:35:44 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:18.328 17:35:44 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:18.328 17:35:44 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:18.328 17:35:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:18.328 17:35:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.328 17:35:44 -- common/autotest_common.sh@10 -- # set +x 00:19:18.897 nvme0n1 00:19:18.897 17:35:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.897 17:35:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.897 17:35:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.897 17:35:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.897 17:35:45 -- common/autotest_common.sh@10 -- # set +x 00:19:18.897 17:35:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.897 17:35:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.897 17:35:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.897 17:35:45 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.897 17:35:45 -- common/autotest_common.sh@10 -- # set +x 00:19:19.156 17:35:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.156 17:35:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.156 17:35:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:19.157 17:35:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.157 17:35:45 -- host/auth.sh@44 -- # digest=sha512 00:19:19.157 17:35:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:19.157 17:35:45 -- host/auth.sh@44 -- # keyid=4 00:19:19.157 17:35:45 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:19.157 17:35:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:19.157 17:35:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:19.157 17:35:45 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:19.157 17:35:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:19:19.157 17:35:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.157 17:35:45 -- host/auth.sh@68 -- # digest=sha512 00:19:19.157 17:35:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:19.157 17:35:45 -- host/auth.sh@68 -- # keyid=4 00:19:19.157 17:35:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:19.157 17:35:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.157 17:35:45 -- common/autotest_common.sh@10 -- # set +x 00:19:19.157 17:35:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.157 17:35:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.157 17:35:45 -- nvmf/common.sh@717 -- # local ip 00:19:19.157 17:35:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.157 17:35:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.157 17:35:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.157 17:35:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.157 17:35:45 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:19.157 17:35:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:19.157 17:35:45 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:19.157 17:35:45 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:19.157 17:35:45 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:19.157 17:35:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:19.157 17:35:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.157 17:35:45 -- common/autotest_common.sh@10 -- # set +x 00:19:19.724 nvme0n1 00:19:19.724 17:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.724 17:35:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.724 17:35:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:19.724 17:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.724 17:35:46 -- common/autotest_common.sh@10 -- # set +x 00:19:19.724 17:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.724 17:35:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.724 17:35:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.724 17:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:19:19.724 17:35:46 -- common/autotest_common.sh@10 -- # set +x 00:19:19.724 17:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.724 17:35:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.724 17:35:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.724 17:35:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:19.724 17:35:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.724 17:35:46 -- host/auth.sh@44 -- # digest=sha512 00:19:19.724 17:35:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:19.724 17:35:46 -- host/auth.sh@44 -- # keyid=0 00:19:19.724 17:35:46 -- host/auth.sh@45 -- # key=DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:19.724 17:35:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:19.724 17:35:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:19.724 17:35:46 -- host/auth.sh@49 -- # echo DHHC-1:00:MGUxYzFiMWViODk3ZjliODJlMTljNzZlODU3MGU4NDcMFqo7: 00:19:19.724 17:35:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:19:19.724 17:35:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.724 17:35:46 -- host/auth.sh@68 -- # digest=sha512 00:19:19.724 17:35:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:19.724 17:35:46 -- host/auth.sh@68 -- # keyid=0 00:19:19.724 17:35:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:19.724 17:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.724 17:35:46 -- common/autotest_common.sh@10 -- # set +x 00:19:19.724 17:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.724 17:35:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.724 17:35:46 -- nvmf/common.sh@717 -- # local ip 00:19:19.724 17:35:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.724 17:35:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.724 17:35:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.724 17:35:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.724 17:35:46 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:19.724 17:35:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:19.724 17:35:46 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:19.724 17:35:46 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:19.724 17:35:46 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:19.724 17:35:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:19.724 17:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.724 17:35:46 -- common/autotest_common.sh@10 -- # set +x 00:19:20.660 nvme0n1 00:19:20.660 17:35:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.660 17:35:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.660 17:35:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.660 17:35:47 -- common/autotest_common.sh@10 -- # set +x 00:19:20.660 17:35:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:20.660 17:35:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.660 17:35:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.660 17:35:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.660 17:35:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.660 17:35:47 -- common/autotest_common.sh@10 -- # set +x 
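Each sha512 iteration in this stretch of the log repeats the same host-side pattern. rpc_cmd is the autotest wrapper around scripts/rpc.py, so a single pass is roughly equivalent to the stand-alone calls below (addresses, NQNs and flags are taken from the trace; the keyN names refer to DH-HMAC-CHAP secrets the test registered with the initiator earlier in the run, which is not shown here):

# restrict the initiator to the digest/DH group under test
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
# attach; the nvme0 controller only appears if DH-HMAC-CHAP completes against the kernel target
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
# confirm the controller exists, then drop it before the next digest/dhgroup/key combination
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
scripts/rpc.py bdev_nvme_detach_controller nvme0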
00:19:20.660 17:35:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.660 17:35:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:20.660 17:35:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:20.660 17:35:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:20.660 17:35:47 -- host/auth.sh@44 -- # digest=sha512 00:19:20.660 17:35:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:20.660 17:35:47 -- host/auth.sh@44 -- # keyid=1 00:19:20.660 17:35:47 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:20.660 17:35:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:20.660 17:35:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:20.660 17:35:47 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:20.660 17:35:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:19:20.660 17:35:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:20.660 17:35:47 -- host/auth.sh@68 -- # digest=sha512 00:19:20.660 17:35:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:20.660 17:35:47 -- host/auth.sh@68 -- # keyid=1 00:19:20.660 17:35:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:20.660 17:35:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.660 17:35:47 -- common/autotest_common.sh@10 -- # set +x 00:19:20.919 17:35:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.919 17:35:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:20.919 17:35:47 -- nvmf/common.sh@717 -- # local ip 00:19:20.919 17:35:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:20.919 17:35:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:20.919 17:35:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.919 17:35:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.919 17:35:47 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:20.919 17:35:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:20.920 17:35:47 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:20.920 17:35:47 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:20.920 17:35:47 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:20.920 17:35:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:20.920 17:35:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.920 17:35:47 -- common/autotest_common.sh@10 -- # set +x 00:19:21.859 nvme0n1 00:19:21.859 17:35:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.859 17:35:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.859 17:35:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:21.859 17:35:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.859 17:35:48 -- common/autotest_common.sh@10 -- # set +x 00:19:21.859 17:35:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.859 17:35:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.859 17:35:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.859 17:35:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.859 17:35:48 -- common/autotest_common.sh@10 -- # set +x 00:19:21.859 17:35:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.859 17:35:48 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:21.859 17:35:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:21.859 17:35:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:21.859 17:35:48 -- host/auth.sh@44 -- # digest=sha512 00:19:21.859 17:35:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:21.859 17:35:48 -- host/auth.sh@44 -- # keyid=2 00:19:21.859 17:35:48 -- host/auth.sh@45 -- # key=DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:21.859 17:35:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:21.859 17:35:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:21.859 17:35:48 -- host/auth.sh@49 -- # echo DHHC-1:01:MmM2ZjlkZTAxNGVmNzk0MmUxNzU2NGJkYmViYTAzYjjuByWo: 00:19:21.859 17:35:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:19:21.859 17:35:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:21.859 17:35:48 -- host/auth.sh@68 -- # digest=sha512 00:19:21.859 17:35:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:21.859 17:35:48 -- host/auth.sh@68 -- # keyid=2 00:19:21.859 17:35:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:21.859 17:35:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.859 17:35:48 -- common/autotest_common.sh@10 -- # set +x 00:19:21.859 17:35:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.859 17:35:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:21.859 17:35:48 -- nvmf/common.sh@717 -- # local ip 00:19:21.859 17:35:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:21.859 17:35:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:21.859 17:35:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.859 17:35:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.859 17:35:48 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:21.859 17:35:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:21.859 17:35:48 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:21.859 17:35:48 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:21.859 17:35:48 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:21.859 17:35:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:21.859 17:35:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.859 17:35:48 -- common/autotest_common.sh@10 -- # set +x 00:19:22.797 nvme0n1 00:19:22.797 17:35:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.797 17:35:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.797 17:35:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:22.797 17:35:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.797 17:35:49 -- common/autotest_common.sh@10 -- # set +x 00:19:22.797 17:35:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.055 17:35:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.055 17:35:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.055 17:35:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.055 17:35:49 -- common/autotest_common.sh@10 -- # set +x 00:19:23.055 17:35:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.055 17:35:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.055 17:35:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 
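On the target side, nvmet_auth_set_key (invoked just above for keyid 3, with its body traced below) pushes the matching digest, DH group and DHHC-1 secret into the kernel nvmet host entry. The xtrace does not show where the three echo calls are redirected, so the configfs attribute names in this sketch are an assumption rather than something the log confirms; only the host path itself is corroborated by the rmdir in the cleanup later on:

# assumed kernel nvmet attribute names; $key is the DHHC-1:... secret echoed in the trace
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$host/dhchap_hash"      # digest offered to the host
echo 'ffdhe8192'    > "$host/dhchap_dhgroup"   # DH group for the exchange
echo "$key"         > "$host/dhchap_key"       # per-host DH-HMAC-CHAP secret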
00:19:23.055 17:35:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.055 17:35:49 -- host/auth.sh@44 -- # digest=sha512 00:19:23.055 17:35:49 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:23.055 17:35:49 -- host/auth.sh@44 -- # keyid=3 00:19:23.055 17:35:49 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:23.055 17:35:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:23.055 17:35:49 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:23.055 17:35:49 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE2NjVjNWFlZWM1NmMyZjMyMWRlNmNiM2U3NzNhMjIwMmM2ZTU5YTEwMjVjYmRm5xsVuA==: 00:19:23.055 17:35:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:19:23.055 17:35:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.055 17:35:49 -- host/auth.sh@68 -- # digest=sha512 00:19:23.055 17:35:49 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:23.055 17:35:49 -- host/auth.sh@68 -- # keyid=3 00:19:23.055 17:35:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.055 17:35:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.055 17:35:49 -- common/autotest_common.sh@10 -- # set +x 00:19:23.055 17:35:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.055 17:35:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.055 17:35:49 -- nvmf/common.sh@717 -- # local ip 00:19:23.055 17:35:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.055 17:35:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.055 17:35:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.055 17:35:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.055 17:35:49 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:23.055 17:35:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:23.055 17:35:49 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:23.055 17:35:49 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:23.055 17:35:49 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:23.055 17:35:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:23.055 17:35:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.055 17:35:49 -- common/autotest_common.sh@10 -- # set +x 00:19:23.991 nvme0n1 00:19:23.991 17:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.991 17:35:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.991 17:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.991 17:35:50 -- common/autotest_common.sh@10 -- # set +x 00:19:23.991 17:35:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:23.991 17:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.991 17:35:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.991 17:35:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.991 17:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.991 17:35:50 -- common/autotest_common.sh@10 -- # set +x 00:19:23.991 17:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.991 17:35:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.991 17:35:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:23.991 17:35:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.991 
17:35:50 -- host/auth.sh@44 -- # digest=sha512 00:19:23.991 17:35:50 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:23.991 17:35:50 -- host/auth.sh@44 -- # keyid=4 00:19:23.991 17:35:50 -- host/auth.sh@45 -- # key=DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:23.991 17:35:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:23.991 17:35:50 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:23.991 17:35:50 -- host/auth.sh@49 -- # echo DHHC-1:03:NGVlYmIxMGRiOWRlNzczY2JiNGFhNWZlZTkzNWE5ZGZiMTZhOTljODgzMTZlZmM3NzUzZjdkMjJiOGQwYjg3Yxj3QTo=: 00:19:23.991 17:35:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:19:23.991 17:35:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.991 17:35:50 -- host/auth.sh@68 -- # digest=sha512 00:19:23.991 17:35:50 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:23.991 17:35:50 -- host/auth.sh@68 -- # keyid=4 00:19:23.991 17:35:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:23.991 17:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.991 17:35:50 -- common/autotest_common.sh@10 -- # set +x 00:19:23.991 17:35:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.991 17:35:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.991 17:35:50 -- nvmf/common.sh@717 -- # local ip 00:19:23.991 17:35:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.991 17:35:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.991 17:35:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.991 17:35:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.991 17:35:50 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:23.991 17:35:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:23.991 17:35:50 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:23.992 17:35:50 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:23.992 17:35:50 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:23.992 17:35:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:23.992 17:35:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.992 17:35:50 -- common/autotest_common.sh@10 -- # set +x 00:19:25.369 nvme0n1 00:19:25.369 17:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.369 17:35:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.369 17:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.369 17:35:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:25.369 17:35:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.369 17:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.369 17:35:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.369 17:35:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:25.369 17:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.369 17:35:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.369 17:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.369 17:35:51 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:25.369 17:35:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:25.369 17:35:51 -- host/auth.sh@44 -- # digest=sha256 00:19:25.369 17:35:51 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:25.369 
17:35:51 -- host/auth.sh@44 -- # keyid=1 00:19:25.369 17:35:51 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:25.369 17:35:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:25.369 17:35:51 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:25.369 17:35:51 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2ZiZGIwOWM4OGFiMTZjZjY0MzZjZjkwNmFkZDNkNmQyODY2MmZlN2MwNjAzMTI3H/wT7g==: 00:19:25.369 17:35:51 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:25.369 17:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.369 17:35:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.369 17:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.369 17:35:51 -- host/auth.sh@119 -- # get_main_ns_ip 00:19:25.369 17:35:51 -- nvmf/common.sh@717 -- # local ip 00:19:25.369 17:35:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:25.369 17:35:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:25.369 17:35:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.369 17:35:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.369 17:35:51 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:25.369 17:35:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:25.369 17:35:51 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:25.369 17:35:51 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:25.369 17:35:51 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:25.369 17:35:51 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:25.369 17:35:51 -- common/autotest_common.sh@638 -- # local es=0 00:19:25.369 17:35:51 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:25.369 17:35:51 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:25.369 17:35:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:25.369 17:35:51 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:25.369 17:35:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:25.369 17:35:51 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:25.369 17:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.369 17:35:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.369 request: 00:19:25.369 { 00:19:25.369 "name": "nvme0", 00:19:25.369 "trtype": "rdma", 00:19:25.369 "traddr": "192.168.100.8", 00:19:25.369 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:25.369 "adrfam": "ipv4", 00:19:25.369 "trsvcid": "4420", 00:19:25.369 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:25.369 "method": "bdev_nvme_attach_controller", 00:19:25.369 "req_id": 1 00:19:25.369 } 00:19:25.369 Got JSON-RPC error response 00:19:25.369 response: 00:19:25.369 { 00:19:25.369 "code": -32602, 00:19:25.369 "message": "Invalid parameters" 00:19:25.369 } 00:19:25.369 17:35:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:25.369 17:35:51 -- common/autotest_common.sh@641 -- # es=1 00:19:25.369 17:35:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:25.369 17:35:51 -- 
common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:25.369 17:35:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:25.369 17:35:51 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:19:25.369 17:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.369 17:35:51 -- host/auth.sh@121 -- # jq length 00:19:25.369 17:35:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.369 17:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.369 17:35:51 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:19:25.369 17:35:51 -- host/auth.sh@124 -- # get_main_ns_ip 00:19:25.369 17:35:51 -- nvmf/common.sh@717 -- # local ip 00:19:25.369 17:35:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:25.369 17:35:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:25.369 17:35:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:25.369 17:35:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:25.369 17:35:51 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:19:25.369 17:35:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:25.369 17:35:51 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:19:25.369 17:35:51 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:19:25.369 17:35:51 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:19:25.369 17:35:51 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:25.369 17:35:51 -- common/autotest_common.sh@638 -- # local es=0 00:19:25.369 17:35:51 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:25.369 17:35:51 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:25.369 17:35:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:25.369 17:35:51 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:25.369 17:35:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:25.370 17:35:51 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:25.370 17:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.370 17:35:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.370 request: 00:19:25.370 { 00:19:25.370 "name": "nvme0", 00:19:25.370 "trtype": "rdma", 00:19:25.370 "traddr": "192.168.100.8", 00:19:25.370 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:25.370 "adrfam": "ipv4", 00:19:25.370 "trsvcid": "4420", 00:19:25.370 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:25.370 "dhchap_key": "key2", 00:19:25.370 "method": "bdev_nvme_attach_controller", 00:19:25.370 "req_id": 1 00:19:25.370 } 00:19:25.370 Got JSON-RPC error response 00:19:25.370 response: 00:19:25.370 { 00:19:25.370 "code": -32602, 00:19:25.370 "message": "Invalid parameters" 00:19:25.370 } 00:19:25.370 17:35:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:25.370 17:35:51 -- common/autotest_common.sh@641 -- # es=1 00:19:25.370 17:35:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:25.370 17:35:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:25.370 17:35:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:25.370 17:35:51 -- host/auth.sh@127 -- # 
rpc_cmd bdev_nvme_get_controllers 00:19:25.370 17:35:51 -- host/auth.sh@127 -- # jq length 00:19:25.370 17:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.370 17:35:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.370 17:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.370 17:35:51 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:19:25.370 17:35:51 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:19:25.370 17:35:51 -- host/auth.sh@130 -- # cleanup 00:19:25.370 17:35:51 -- host/auth.sh@24 -- # nvmftestfini 00:19:25.370 17:35:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:25.370 17:35:51 -- nvmf/common.sh@117 -- # sync 00:19:25.370 17:35:51 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:25.370 17:35:51 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:25.370 17:35:51 -- nvmf/common.sh@120 -- # set +e 00:19:25.370 17:35:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.370 17:35:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:25.370 rmmod nvme_rdma 00:19:25.370 rmmod nvme_fabrics 00:19:25.370 17:35:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.370 17:35:51 -- nvmf/common.sh@124 -- # set -e 00:19:25.370 17:35:51 -- nvmf/common.sh@125 -- # return 0 00:19:25.370 17:35:51 -- nvmf/common.sh@478 -- # '[' -n 2060867 ']' 00:19:25.370 17:35:51 -- nvmf/common.sh@479 -- # killprocess 2060867 00:19:25.370 17:35:51 -- common/autotest_common.sh@936 -- # '[' -z 2060867 ']' 00:19:25.370 17:35:51 -- common/autotest_common.sh@940 -- # kill -0 2060867 00:19:25.370 17:35:51 -- common/autotest_common.sh@941 -- # uname 00:19:25.370 17:35:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:25.370 17:35:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2060867 00:19:25.370 17:35:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:25.370 17:35:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:25.370 17:35:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2060867' 00:19:25.370 killing process with pid 2060867 00:19:25.370 17:35:51 -- common/autotest_common.sh@955 -- # kill 2060867 00:19:25.370 17:35:51 -- common/autotest_common.sh@960 -- # wait 2060867 00:19:25.628 17:35:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:25.628 17:35:52 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:25.628 17:35:52 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:25.628 17:35:52 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:25.628 17:35:52 -- host/auth.sh@27 -- # clean_kernel_target 00:19:25.628 17:35:52 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:25.628 17:35:52 -- nvmf/common.sh@675 -- # echo 0 00:19:25.628 17:35:52 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:25.628 17:35:52 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:25.628 17:35:52 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:25.628 17:35:52 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:25.887 17:35:52 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:19:25.887 17:35:52 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:19:25.887 17:35:52 -- 
nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:19:26.821 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:26.821 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:26.821 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:26.821 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:26.821 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:26.821 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:26.821 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:26.822 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:26.822 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:26.822 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:26.822 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:26.822 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:26.822 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:26.822 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:26.822 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:26.822 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:28.201 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:19:28.201 17:35:54 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.xPx /tmp/spdk.key-null.HAK /tmp/spdk.key-sha256.JKm /tmp/spdk.key-sha384.qoH /tmp/spdk.key-sha512.dOc /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:19:28.201 17:35:54 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:19:29.156 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:19:29.156 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:19:29.156 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:19:29.156 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:19:29.156 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:19:29.156 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:19:29.156 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:19:29.156 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:19:29.156 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:19:29.156 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:19:29.156 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:19:29.156 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:19:29.156 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:19:29.156 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:19:29.156 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:19:29.156 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:19:29.156 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:19:29.156 00:19:29.156 real 0m49.566s 00:19:29.156 user 0m43.843s 00:19:29.156 sys 0m5.740s 00:19:29.156 17:35:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:29.156 17:35:55 -- common/autotest_common.sh@10 -- # set +x 00:19:29.156 ************************************ 00:19:29.156 END TEST nvmf_auth 00:19:29.156 ************************************ 00:19:29.441 17:35:55 -- nvmf/nvmf.sh@104 -- # [[ rdma == \t\c\p ]] 00:19:29.441 17:35:55 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:19:29.442 17:35:55 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:19:29.442 17:35:55 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:19:29.442 17:35:55 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:19:29.442 17:35:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:29.442 
17:35:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:29.442 17:35:55 -- common/autotest_common.sh@10 -- # set +x 00:19:29.442 ************************************ 00:19:29.442 START TEST nvmf_bdevperf 00:19:29.442 ************************************ 00:19:29.442 17:35:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:19:29.442 * Looking for test storage... 00:19:29.442 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:29.442 17:35:55 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:29.442 17:35:55 -- nvmf/common.sh@7 -- # uname -s 00:19:29.442 17:35:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:29.442 17:35:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:29.442 17:35:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:29.442 17:35:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:29.442 17:35:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:29.442 17:35:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:29.442 17:35:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:29.442 17:35:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:29.442 17:35:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:29.442 17:35:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:29.442 17:35:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.442 17:35:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.442 17:35:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:29.442 17:35:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:29.442 17:35:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:29.442 17:35:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:29.442 17:35:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:29.442 17:35:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:29.442 17:35:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:29.442 17:35:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:29.442 17:35:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.442 17:35:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.442 17:35:55 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.442 17:35:55 -- paths/export.sh@5 -- # export PATH 00:19:29.442 17:35:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:29.442 17:35:55 -- nvmf/common.sh@47 -- # : 0 00:19:29.442 17:35:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:29.442 17:35:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:29.442 17:35:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:29.442 17:35:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:29.442 17:35:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:29.442 17:35:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:29.442 17:35:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:29.442 17:35:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:29.442 17:35:55 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:29.442 17:35:55 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:29.442 17:35:55 -- host/bdevperf.sh@24 -- # nvmftestinit 00:19:29.442 17:35:55 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:29.442 17:35:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.442 17:35:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:29.442 17:35:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:29.442 17:35:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:29.442 17:35:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.442 17:35:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.442 17:35:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.442 17:35:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:29.442 17:35:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:29.442 17:35:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:29.442 17:35:55 -- common/autotest_common.sh@10 -- # set +x 00:19:31.357 17:35:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:31.357 17:35:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:31.357 17:35:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:31.357 17:35:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:31.357 17:35:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:31.357 17:35:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:31.357 17:35:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:31.357 17:35:57 -- nvmf/common.sh@295 -- # net_devs=() 00:19:31.357 17:35:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:31.357 17:35:57 
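nvmftestinit, traced from this point on, detects the mlx5 ports, picks their netdevs and records the IPv4 addresses the rest of the run will target. Stripped of its helper functions, the per-interface lookup that yields 192.168.100.8 and 192.168.100.9 further below boils down to:

# IPv4 of each RDMA netdev, as done by get_ip_address in nvmf/common.sh
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9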
-- nvmf/common.sh@296 -- # e810=() 00:19:31.357 17:35:57 -- nvmf/common.sh@296 -- # local -ga e810 00:19:31.357 17:35:57 -- nvmf/common.sh@297 -- # x722=() 00:19:31.357 17:35:57 -- nvmf/common.sh@297 -- # local -ga x722 00:19:31.357 17:35:57 -- nvmf/common.sh@298 -- # mlx=() 00:19:31.357 17:35:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:31.357 17:35:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.357 17:35:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.357 17:35:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.357 17:35:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.357 17:35:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.357 17:35:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.357 17:35:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.357 17:35:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.357 17:35:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.357 17:35:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.357 17:35:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.357 17:35:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:31.357 17:35:57 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:31.357 17:35:57 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:31.357 17:35:57 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:31.357 17:35:57 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:31.357 17:35:57 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:31.357 17:35:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:31.357 17:35:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.357 17:35:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:19:31.357 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:19:31.357 17:35:57 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:31.357 17:35:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:31.357 17:35:57 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:19:31.357 17:35:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:31.357 17:35:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.357 17:35:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:19:31.357 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:19:31.357 17:35:57 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:31.357 17:35:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:31.357 17:35:57 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:19:31.357 17:35:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:31.357 17:35:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:31.357 17:35:57 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:31.357 17:35:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.357 17:35:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.357 17:35:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:31.357 17:35:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.357 17:35:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:19:31.357 Found net devices under 0000:09:00.0: mlx_0_0 00:19:31.357 17:35:57 -- 
nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.357 17:35:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.357 17:35:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.357 17:35:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:31.357 17:35:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.357 17:35:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:19:31.357 Found net devices under 0000:09:00.1: mlx_0_1 00:19:31.357 17:35:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.357 17:35:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:31.357 17:35:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:31.357 17:35:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:31.357 17:35:57 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:31.357 17:35:57 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:31.357 17:35:57 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:31.357 17:35:57 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:31.357 17:35:57 -- nvmf/common.sh@58 -- # uname 00:19:31.357 17:35:57 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:31.357 17:35:57 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:31.357 17:35:57 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:31.357 17:35:57 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:31.357 17:35:57 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:31.357 17:35:57 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:31.357 17:35:57 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:31.358 17:35:57 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:31.358 17:35:57 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:31.358 17:35:57 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:31.358 17:35:57 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:31.358 17:35:57 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:31.358 17:35:57 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:31.358 17:35:57 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:31.358 17:35:57 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:31.358 17:35:57 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:31.358 17:35:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:31.358 17:35:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.358 17:35:57 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:31.358 17:35:57 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:31.358 17:35:57 -- nvmf/common.sh@105 -- # continue 2 00:19:31.358 17:35:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:31.358 17:35:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.358 17:35:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:31.358 17:35:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.358 17:35:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:31.358 17:35:57 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:31.358 17:35:57 -- nvmf/common.sh@105 -- # continue 2 00:19:31.358 17:35:57 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:31.358 17:35:57 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:31.358 17:35:57 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:31.358 17:35:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:31.358 17:35:57 -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:19:31.358 17:35:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:31.358 17:35:57 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:31.358 17:35:57 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:31.358 17:35:57 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:31.358 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:31.358 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:19:31.358 altname enp9s0f0np0 00:19:31.358 inet 192.168.100.8/24 scope global mlx_0_0 00:19:31.358 valid_lft forever preferred_lft forever 00:19:31.358 17:35:57 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:31.358 17:35:57 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:31.358 17:35:57 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:31.358 17:35:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:31.358 17:35:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:31.358 17:35:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:31.358 17:35:57 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:31.358 17:35:57 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:31.358 17:35:57 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:31.358 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:31.358 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:19:31.358 altname enp9s0f1np1 00:19:31.358 inet 192.168.100.9/24 scope global mlx_0_1 00:19:31.358 valid_lft forever preferred_lft forever 00:19:31.358 17:35:57 -- nvmf/common.sh@411 -- # return 0 00:19:31.358 17:35:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:31.358 17:35:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:31.358 17:35:57 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:31.358 17:35:57 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:31.358 17:35:57 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:31.358 17:35:57 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:31.358 17:35:57 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:31.358 17:35:57 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:31.358 17:35:57 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:31.358 17:35:57 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:31.358 17:35:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:31.358 17:35:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.358 17:35:57 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:31.358 17:35:57 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:31.358 17:35:57 -- nvmf/common.sh@105 -- # continue 2 00:19:31.358 17:35:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:31.358 17:35:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.358 17:35:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:31.358 17:35:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.358 17:35:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:31.358 17:35:57 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:31.358 17:35:57 -- nvmf/common.sh@105 -- # continue 2 00:19:31.358 17:35:57 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:31.358 17:35:57 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:31.358 17:35:57 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:31.358 17:35:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:31.358 17:35:57 -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:19:31.358 17:35:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:31.358 17:35:57 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:31.358 17:35:57 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:31.358 17:35:57 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:31.358 17:35:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:31.358 17:35:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:31.358 17:35:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:31.358 17:35:57 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:31.358 192.168.100.9' 00:19:31.358 17:35:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:31.358 192.168.100.9' 00:19:31.358 17:35:57 -- nvmf/common.sh@446 -- # head -n 1 00:19:31.358 17:35:57 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:31.358 17:35:57 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:31.358 192.168.100.9' 00:19:31.358 17:35:57 -- nvmf/common.sh@447 -- # tail -n +2 00:19:31.358 17:35:57 -- nvmf/common.sh@447 -- # head -n 1 00:19:31.358 17:35:57 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:31.358 17:35:57 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:31.358 17:35:57 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:31.358 17:35:57 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:31.358 17:35:57 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:31.358 17:35:57 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:31.358 17:35:57 -- host/bdevperf.sh@25 -- # tgt_init 00:19:31.358 17:35:57 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:19:31.358 17:35:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:31.358 17:35:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:31.358 17:35:57 -- common/autotest_common.sh@10 -- # set +x 00:19:31.358 17:35:57 -- nvmf/common.sh@470 -- # nvmfpid=2070386 00:19:31.358 17:35:57 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:31.358 17:35:57 -- nvmf/common.sh@471 -- # waitforlisten 2070386 00:19:31.358 17:35:57 -- common/autotest_common.sh@817 -- # '[' -z 2070386 ']' 00:19:31.358 17:35:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.358 17:35:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:31.358 17:35:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.358 17:35:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:31.358 17:35:57 -- common/autotest_common.sh@10 -- # set +x 00:19:31.617 [2024-04-18 17:35:57.935550] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:19:31.617 [2024-04-18 17:35:57.935636] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.617 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.617 [2024-04-18 17:35:58.000361] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:31.617 [2024-04-18 17:35:58.102570] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
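nvmfappstart launches the target with core mask 0xE (cores 1-3, matching the three reactor notices below) and waitforlisten blocks until the app answers on its JSON-RPC socket. Run by hand, and assuming the default /var/tmp/spdk.sock socket named in the wait message, that amounts to roughly the following; the polling loop is a sketch of what waitforlisten does, not its literal implementation:

# instance id 0, all tracepoint groups, cores 1-3 (0xE); PID recorded for the later killprocess
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# wait until the RPC socket accepts requests before sending any configuration
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done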
00:19:31.617 [2024-04-18 17:35:58.102627] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.617 [2024-04-18 17:35:58.102649] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.617 [2024-04-18 17:35:58.102660] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.617 [2024-04-18 17:35:58.102670] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.617 [2024-04-18 17:35:58.102757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.617 [2024-04-18 17:35:58.102822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.617 [2024-04-18 17:35:58.102827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.875 17:35:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:31.875 17:35:58 -- common/autotest_common.sh@850 -- # return 0 00:19:31.875 17:35:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:31.875 17:35:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:31.875 17:35:58 -- common/autotest_common.sh@10 -- # set +x 00:19:31.875 17:35:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.875 17:35:58 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:31.875 17:35:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.875 17:35:58 -- common/autotest_common.sh@10 -- # set +x 00:19:31.875 [2024-04-18 17:35:58.252758] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20a0380/0x20a4870) succeed. 00:19:31.875 [2024-04-18 17:35:58.263644] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20a18d0/0x20e5f00) succeed. 
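Together with the nvmf_create_transport call traced just above, the rpc_cmd calls that follow assemble the whole target: a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace and an RDMA listener. Issued manually through scripts/rpc.py, the same sequence is:

scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420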
00:19:31.875 17:35:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.875 17:35:58 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:31.875 17:35:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.875 17:35:58 -- common/autotest_common.sh@10 -- # set +x 00:19:31.875 Malloc0 00:19:31.875 17:35:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.875 17:35:58 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:31.875 17:35:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.875 17:35:58 -- common/autotest_common.sh@10 -- # set +x 00:19:31.875 17:35:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.875 17:35:58 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:31.875 17:35:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.875 17:35:58 -- common/autotest_common.sh@10 -- # set +x 00:19:31.875 17:35:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.875 17:35:58 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:31.875 17:35:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.875 17:35:58 -- common/autotest_common.sh@10 -- # set +x 00:19:32.134 [2024-04-18 17:35:58.418701] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:32.134 17:35:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.134 17:35:58 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:19:32.134 17:35:58 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:19:32.134 17:35:58 -- nvmf/common.sh@521 -- # config=() 00:19:32.134 17:35:58 -- nvmf/common.sh@521 -- # local subsystem config 00:19:32.134 17:35:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:32.134 17:35:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:32.134 { 00:19:32.134 "params": { 00:19:32.134 "name": "Nvme$subsystem", 00:19:32.134 "trtype": "$TEST_TRANSPORT", 00:19:32.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.134 "adrfam": "ipv4", 00:19:32.134 "trsvcid": "$NVMF_PORT", 00:19:32.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.134 "hdgst": ${hdgst:-false}, 00:19:32.134 "ddgst": ${ddgst:-false} 00:19:32.134 }, 00:19:32.134 "method": "bdev_nvme_attach_controller" 00:19:32.134 } 00:19:32.134 EOF 00:19:32.134 )") 00:19:32.134 17:35:58 -- nvmf/common.sh@543 -- # cat 00:19:32.134 17:35:58 -- nvmf/common.sh@545 -- # jq . 00:19:32.134 17:35:58 -- nvmf/common.sh@546 -- # IFS=, 00:19:32.134 17:35:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:32.134 "params": { 00:19:32.134 "name": "Nvme1", 00:19:32.134 "trtype": "rdma", 00:19:32.134 "traddr": "192.168.100.8", 00:19:32.134 "adrfam": "ipv4", 00:19:32.134 "trsvcid": "4420", 00:19:32.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.134 "hdgst": false, 00:19:32.134 "ddgst": false 00:19:32.134 }, 00:19:32.134 "method": "bdev_nvme_attach_controller" 00:19:32.134 }' 00:19:32.134 [2024-04-18 17:35:58.459158] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
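[editor's note] For reference, the target-side setup that the rpc_cmd calls above perform, and the bdevperf configuration that gen_nvmf_target_json feeds in via /dev/fd/62, can be reproduced by hand. The sketch below only restates what this log already shows (Malloc0, nqn.2016-06.io.spdk:cnode1, the 192.168.100.8:4420 RDMA listener, and the -q 128 -o 4096 -w verify -t 1 job); the scripts/rpc.py client, the relative build paths, the bdevperf.json file name, and the outer "subsystems"/"config" wrapper (the excerpt prints only the inner attach_controller entry) are assumptions about a standard SPDK checkout, not part of the captured run.

    # Minimal sketch, assuming an SPDK checkout with an nvmf_tgt already listening on its default RPC socket.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Hypothetical bdevperf.json holding the same single-controller config the harness generates on the fly.
    cat > bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1", "trtype": "rdma", "traddr": "192.168.100.8", "adrfam": "ipv4",
                "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json bdevperf.json -q 128 -o 4096 -w verify -t 1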
00:19:32.134 [2024-04-18 17:35:58.459248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2070412 ] 00:19:32.134 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.134 [2024-04-18 17:35:58.519955] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.134 [2024-04-18 17:35:58.626834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.392 Running I/O for 1 seconds... 00:19:33.330 00:19:33.330 Latency(us) 00:19:33.330 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.330 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:33.330 Verification LBA range: start 0x0 length 0x4000 00:19:33.330 Nvme1n1 : 1.01 14886.50 58.15 0.00 0.00 8548.29 3470.98 13592.65 00:19:33.330 =================================================================================================================== 00:19:33.330 Total : 14886.50 58.15 0.00 0.00 8548.29 3470.98 13592.65 00:19:33.587 17:36:00 -- host/bdevperf.sh@30 -- # bdevperfpid=2070693 00:19:33.588 17:36:00 -- host/bdevperf.sh@32 -- # sleep 3 00:19:33.588 17:36:00 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:19:33.588 17:36:00 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:19:33.588 17:36:00 -- nvmf/common.sh@521 -- # config=() 00:19:33.588 17:36:00 -- nvmf/common.sh@521 -- # local subsystem config 00:19:33.588 17:36:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:33.588 17:36:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:33.588 { 00:19:33.588 "params": { 00:19:33.588 "name": "Nvme$subsystem", 00:19:33.588 "trtype": "$TEST_TRANSPORT", 00:19:33.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.588 "adrfam": "ipv4", 00:19:33.588 "trsvcid": "$NVMF_PORT", 00:19:33.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.588 "hdgst": ${hdgst:-false}, 00:19:33.588 "ddgst": ${ddgst:-false} 00:19:33.588 }, 00:19:33.588 "method": "bdev_nvme_attach_controller" 00:19:33.588 } 00:19:33.588 EOF 00:19:33.588 )") 00:19:33.588 17:36:00 -- nvmf/common.sh@543 -- # cat 00:19:33.588 17:36:00 -- nvmf/common.sh@545 -- # jq . 00:19:33.588 17:36:00 -- nvmf/common.sh@546 -- # IFS=, 00:19:33.588 17:36:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:33.588 "params": { 00:19:33.588 "name": "Nvme1", 00:19:33.588 "trtype": "rdma", 00:19:33.588 "traddr": "192.168.100.8", 00:19:33.588 "adrfam": "ipv4", 00:19:33.588 "trsvcid": "4420", 00:19:33.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.588 "hdgst": false, 00:19:33.588 "ddgst": false 00:19:33.588 }, 00:19:33.588 "method": "bdev_nvme_attach_controller" 00:19:33.588 }' 00:19:33.846 [2024-04-18 17:36:00.150203] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:19:33.846 [2024-04-18 17:36:00.150285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2070693 ] 00:19:33.846 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.846 [2024-04-18 17:36:00.211826] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.846 [2024-04-18 17:36:00.320903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.104 Running I/O for 15 seconds... 00:19:36.634 17:36:03 -- host/bdevperf.sh@33 -- # kill -9 2070386 00:19:36.634 17:36:03 -- host/bdevperf.sh@35 -- # sleep 3 00:19:38.014 [2024-04-18 17:36:04.134070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:40992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.014 [2024-04-18 17:36:04.134133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.014 [2024-04-18 17:36:04.134169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.014 [2024-04-18 17:36:04.134187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.014 [2024-04-18 17:36:04.134207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.014 [2024-04-18 17:36:04.134223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.014 [2024-04-18 17:36:04.134241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.014 [2024-04-18 17:36:04.134257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.014 [2024-04-18 17:36:04.134274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.014 [2024-04-18 17:36:04.134290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.014 [2024-04-18 17:36:04.134318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.014 [2024-04-18 17:36:04.134337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.014 [2024-04-18 17:36:04.134354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.014 [2024-04-18 17:36:04.134369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.014 [2024-04-18 17:36:04.134396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.014 [2024-04-18 17:36:04.134414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.014 [2024-04-18 17:36:04.134431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.014 [2024-04-18 17:36:04.134451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.014 [2024-04-18 17:36:04.134469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.014 [2024-04-18 17:36:04.134485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.014 [2024-04-18 17:36:04.134502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.014 [2024-04-18 17:36:04.134517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.134974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.134989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:38.015 [2024-04-18 17:36:04.135086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 
lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.015 [2024-04-18 17:36:04.135779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.015 [2024-04-18 17:36:04.135796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.135813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.135830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.135848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.135863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.135880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.135895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.135914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.135929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.135946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.135961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.135982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.135999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 
00:19:38.016 [2024-04-18 17:36:04.136080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.136969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.136986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.137002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.137019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.016 [2024-04-18 17:36:04.137035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.137052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:38.016 [2024-04-18 17:36:04.137068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.016 [2024-04-18 17:36:04.137084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.137974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.137992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.138008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.138026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.138045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 
00:19:38.017 [2024-04-18 17:36:04.138064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.138081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.138098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.138114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.138131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.138148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.138166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.138181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.138199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.138214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.138231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.138247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.138265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.138281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.138298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:38.017 [2024-04-18 17:36:04.138314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.138331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x187200 00:19:38.017 [2024-04-18 17:36:04.138348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.138366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x187200 00:19:38.017 [2024-04-18 17:36:04.138388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.017 [2024-04-18 17:36:04.138409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x187200 00:19:38.018 [2024-04-18 17:36:04.138426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:de90 p:0 m:0 dnr:0 00:19:38.018 [2024-04-18 17:36:04.140053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:38.018 [2024-04-18 17:36:04.140092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:38.018 [2024-04-18 17:36:04.140110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40984 len:8 PRP1 0x0 PRP2 0x0 00:19:38.018 [2024-04-18 17:36:04.140126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:38.018 [2024-04-18 17:36:04.140198] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a00 was disconnected and freed. reset controller. 00:19:38.018 [2024-04-18 17:36:04.143809] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:38.018 [2024-04-18 17:36:04.162289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:38.018 [2024-04-18 17:36:04.165480] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:38.018 [2024-04-18 17:36:04.165518] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:38.018 [2024-04-18 17:36:04.165534] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:19:38.953 [2024-04-18 17:36:05.169587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:38.953 [2024-04-18 17:36:05.169630] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:38.953 [2024-04-18 17:36:05.169873] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:38.954 [2024-04-18 17:36:05.169898] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:38.954 [2024-04-18 17:36:05.169917] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:38.954 [2024-04-18 17:36:05.172879] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:38.954 [2024-04-18 17:36:05.173528] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:38.954 [2024-04-18 17:36:05.186728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:38.954 [2024-04-18 17:36:05.189996] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:38.954 [2024-04-18 17:36:05.190029] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:38.954 [2024-04-18 17:36:05.190050] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:19:39.892 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2070386 Killed "${NVMF_APP[@]}" "$@" 00:19:39.892 17:36:06 -- host/bdevperf.sh@36 -- # tgt_init 00:19:39.892 17:36:06 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:19:39.892 17:36:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:39.892 17:36:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:39.892 17:36:06 -- common/autotest_common.sh@10 -- # set +x 00:19:39.892 17:36:06 -- nvmf/common.sh@470 -- # nvmfpid=2071457 00:19:39.892 17:36:06 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:39.892 17:36:06 -- nvmf/common.sh@471 -- # waitforlisten 2071457 00:19:39.892 17:36:06 -- common/autotest_common.sh@817 -- # '[' -z 2071457 ']' 00:19:39.892 17:36:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.892 17:36:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:39.892 17:36:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.892 17:36:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:39.892 17:36:06 -- common/autotest_common.sh@10 -- # set +x 00:19:39.892 [2024-04-18 17:36:06.164370] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:19:39.892 [2024-04-18 17:36:06.164456] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.892 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.892 [2024-04-18 17:36:06.194040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:39.892 [2024-04-18 17:36:06.194073] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:39.892 [2024-04-18 17:36:06.194308] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:39.892 [2024-04-18 17:36:06.194331] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:39.892 [2024-04-18 17:36:06.194346] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:39.892 [2024-04-18 17:36:06.197911] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:39.892 [2024-04-18 17:36:06.201696] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:39.893 [2024-04-18 17:36:06.204632] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:39.893 [2024-04-18 17:36:06.204674] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:39.893 [2024-04-18 17:36:06.204688] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:19:39.893 [2024-04-18 17:36:06.230459] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:39.893 [2024-04-18 17:36:06.348862] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.893 [2024-04-18 17:36:06.348927] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.893 [2024-04-18 17:36:06.348953] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.893 [2024-04-18 17:36:06.348966] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.893 [2024-04-18 17:36:06.348978] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.893 [2024-04-18 17:36:06.349070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.893 [2024-04-18 17:36:06.349127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.893 [2024-04-18 17:36:06.349131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.830 17:36:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:40.830 17:36:07 -- common/autotest_common.sh@850 -- # return 0 00:19:40.830 17:36:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:40.830 17:36:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:40.830 17:36:07 -- common/autotest_common.sh@10 -- # set +x 00:19:40.830 17:36:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.830 17:36:07 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:40.830 17:36:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.830 17:36:07 -- common/autotest_common.sh@10 -- # set +x 00:19:40.830 [2024-04-18 17:36:07.150567] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbb3380/0xbb7870) succeed. 00:19:40.830 [2024-04-18 17:36:07.161233] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbb48d0/0xbf8f00) succeed. 00:19:40.830 [2024-04-18 17:36:07.208801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:40.830 [2024-04-18 17:36:07.208860] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:40.830 [2024-04-18 17:36:07.209079] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:40.830 [2024-04-18 17:36:07.209101] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:40.830 [2024-04-18 17:36:07.209126] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:40.830 [2024-04-18 17:36:07.212319] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.830 [2024-04-18 17:36:07.217997] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:40.830 [2024-04-18 17:36:07.220993] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:40.830 [2024-04-18 17:36:07.221023] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:40.830 [2024-04-18 17:36:07.221036] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:19:40.830 17:36:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.830 17:36:07 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:40.830 17:36:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.830 17:36:07 -- common/autotest_common.sh@10 -- # set +x 00:19:40.830 Malloc0 00:19:40.830 17:36:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.830 17:36:07 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:40.830 17:36:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.830 17:36:07 -- common/autotest_common.sh@10 -- # set +x 00:19:40.830 17:36:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.830 17:36:07 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:40.830 17:36:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.830 17:36:07 -- common/autotest_common.sh@10 -- # set +x 00:19:40.830 17:36:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.830 17:36:07 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:40.830 17:36:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.830 17:36:07 -- common/autotest_common.sh@10 -- # set +x 00:19:40.830 [2024-04-18 17:36:07.328161] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:40.830 17:36:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.830 17:36:07 -- host/bdevperf.sh@38 -- # wait 2070693 00:19:41.766 [2024-04-18 17:36:08.224984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:41.766 [2024-04-18 17:36:08.225022] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:41.766 [2024-04-18 17:36:08.225246] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:41.766 [2024-04-18 17:36:08.225268] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:41.766 [2024-04-18 17:36:08.225284] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:41.766 [2024-04-18 17:36:08.228475] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:41.766 [2024-04-18 17:36:08.234058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:41.766 [2024-04-18 17:36:08.280202] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:49.898 00:19:49.898 Latency(us) 00:19:49.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.898 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:49.898 Verification LBA range: start 0x0 length 0x4000 00:19:49.898 Nvme1n1 : 15.01 9738.18 38.04 10552.20 0.00 6285.89 576.47 1031488.09 00:19:49.898 =================================================================================================================== 00:19:49.898 Total : 9738.18 38.04 10552.20 0.00 6285.89 576.47 1031488.09 00:19:49.898 17:36:15 -- host/bdevperf.sh@39 -- # sync 00:19:49.898 17:36:15 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:49.898 17:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.898 17:36:15 -- common/autotest_common.sh@10 -- # set +x 00:19:49.898 17:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.898 17:36:15 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:19:49.898 17:36:15 -- host/bdevperf.sh@44 -- # nvmftestfini 00:19:49.898 17:36:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:49.898 17:36:15 -- nvmf/common.sh@117 -- # sync 00:19:49.898 17:36:15 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:49.898 17:36:15 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:49.898 17:36:15 -- nvmf/common.sh@120 -- # set +e 00:19:49.898 17:36:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:49.898 17:36:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:49.898 rmmod nvme_rdma 00:19:49.898 rmmod nvme_fabrics 00:19:49.898 17:36:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:49.898 17:36:15 -- nvmf/common.sh@124 -- # set -e 00:19:49.898 17:36:15 -- nvmf/common.sh@125 -- # return 0 00:19:49.898 17:36:15 -- nvmf/common.sh@478 -- # '[' -n 2071457 ']' 00:19:49.898 17:36:15 -- nvmf/common.sh@479 -- # killprocess 2071457 00:19:49.898 17:36:15 -- common/autotest_common.sh@936 -- # '[' -z 2071457 ']' 00:19:49.898 17:36:15 -- common/autotest_common.sh@940 -- # kill -0 2071457 00:19:49.898 17:36:15 -- common/autotest_common.sh@941 -- # uname 00:19:49.898 17:36:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:49.898 17:36:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2071457 00:19:49.898 17:36:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:49.898 17:36:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:49.898 17:36:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2071457' 00:19:49.898 killing process with pid 2071457 00:19:49.898 17:36:15 -- common/autotest_common.sh@955 -- # kill 
2071457 00:19:49.898 17:36:15 -- common/autotest_common.sh@960 -- # wait 2071457 00:19:49.898 17:36:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:49.898 17:36:16 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:49.898 00:19:49.898 real 0m20.506s 00:19:49.898 user 1m2.511s 00:19:49.898 sys 0m2.579s 00:19:49.898 17:36:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:49.898 17:36:16 -- common/autotest_common.sh@10 -- # set +x 00:19:49.898 ************************************ 00:19:49.898 END TEST nvmf_bdevperf 00:19:49.898 ************************************ 00:19:49.898 17:36:16 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:19:49.898 17:36:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:49.898 17:36:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:49.898 17:36:16 -- common/autotest_common.sh@10 -- # set +x 00:19:50.162 ************************************ 00:19:50.162 START TEST nvmf_target_disconnect 00:19:50.162 ************************************ 00:19:50.162 17:36:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:19:50.162 * Looking for test storage... 00:19:50.162 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:50.162 17:36:16 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.162 17:36:16 -- nvmf/common.sh@7 -- # uname -s 00:19:50.162 17:36:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.162 17:36:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.162 17:36:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.162 17:36:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.162 17:36:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.162 17:36:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.162 17:36:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.162 17:36:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.162 17:36:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.162 17:36:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.162 17:36:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.162 17:36:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.162 17:36:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.162 17:36:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.162 17:36:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.162 17:36:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.162 17:36:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:50.162 17:36:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.162 17:36:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.162 17:36:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.162 17:36:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.162 17:36:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.162 17:36:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.162 17:36:16 -- paths/export.sh@5 -- # export PATH 00:19:50.162 17:36:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.162 17:36:16 -- nvmf/common.sh@47 -- # : 0 00:19:50.162 17:36:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:50.162 17:36:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:50.162 17:36:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.162 17:36:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.162 17:36:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.162 17:36:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:50.162 17:36:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:50.162 17:36:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:50.162 17:36:16 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:19:50.162 17:36:16 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:19:50.162 17:36:16 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:19:50.162 17:36:16 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:19:50.162 17:36:16 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:50.162 17:36:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.162 17:36:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:50.162 17:36:16 -- nvmf/common.sh@399 -- # 
local -g is_hw=no 00:19:50.162 17:36:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:50.162 17:36:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.162 17:36:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.162 17:36:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.162 17:36:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:50.162 17:36:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:50.162 17:36:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:50.162 17:36:16 -- common/autotest_common.sh@10 -- # set +x 00:19:52.064 17:36:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:52.064 17:36:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.064 17:36:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.064 17:36:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.064 17:36:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.064 17:36:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.064 17:36:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.064 17:36:18 -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.064 17:36:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.064 17:36:18 -- nvmf/common.sh@296 -- # e810=() 00:19:52.064 17:36:18 -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.064 17:36:18 -- nvmf/common.sh@297 -- # x722=() 00:19:52.064 17:36:18 -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.064 17:36:18 -- nvmf/common.sh@298 -- # mlx=() 00:19:52.064 17:36:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.064 17:36:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.064 17:36:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.064 17:36:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.064 17:36:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.064 17:36:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.064 17:36:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.064 17:36:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.064 17:36:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.064 17:36:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.064 17:36:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.064 17:36:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.064 17:36:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.064 17:36:18 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:52.064 17:36:18 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:52.064 17:36:18 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:52.064 17:36:18 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:52.064 17:36:18 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:52.064 17:36:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.064 17:36:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.064 17:36:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:19:52.064 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:19:52.064 17:36:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:52.064 17:36:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:52.064 17:36:18 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 
00:19:52.064 17:36:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.064 17:36:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.064 17:36:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:19:52.064 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:19:52.064 17:36:18 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:52.064 17:36:18 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:52.064 17:36:18 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:19:52.064 17:36:18 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.064 17:36:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.064 17:36:18 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:52.064 17:36:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.064 17:36:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.064 17:36:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:52.064 17:36:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.064 17:36:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:19:52.064 Found net devices under 0000:09:00.0: mlx_0_0 00:19:52.064 17:36:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.064 17:36:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.064 17:36:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.064 17:36:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:52.064 17:36:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.064 17:36:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:19:52.064 Found net devices under 0000:09:00.1: mlx_0_1 00:19:52.064 17:36:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.064 17:36:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:52.064 17:36:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:52.064 17:36:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:52.064 17:36:18 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:52.064 17:36:18 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:52.064 17:36:18 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:52.064 17:36:18 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:52.064 17:36:18 -- nvmf/common.sh@58 -- # uname 00:19:52.064 17:36:18 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:52.064 17:36:18 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:52.064 17:36:18 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:52.065 17:36:18 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:52.065 17:36:18 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:52.065 17:36:18 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:52.065 17:36:18 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:52.065 17:36:18 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:52.065 17:36:18 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:52.065 17:36:18 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:52.065 17:36:18 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:52.065 17:36:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.065 17:36:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:52.065 17:36:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:52.065 17:36:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.065 17:36:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:19:52.065 17:36:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.065 17:36:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.065 17:36:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.065 17:36:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:52.065 17:36:18 -- nvmf/common.sh@105 -- # continue 2 00:19:52.065 17:36:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.065 17:36:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.065 17:36:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.065 17:36:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.065 17:36:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:52.065 17:36:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:52.065 17:36:18 -- nvmf/common.sh@105 -- # continue 2 00:19:52.065 17:36:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:52.065 17:36:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:52.065 17:36:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:52.065 17:36:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:52.065 17:36:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.065 17:36:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.065 17:36:18 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:52.065 17:36:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:52.065 17:36:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:52.065 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.065 link/ether b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:19:52.065 altname enp9s0f0np0 00:19:52.065 inet 192.168.100.8/24 scope global mlx_0_0 00:19:52.065 valid_lft forever preferred_lft forever 00:19:52.065 17:36:18 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:52.065 17:36:18 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:52.065 17:36:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:52.065 17:36:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:52.065 17:36:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.065 17:36:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.065 17:36:18 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:52.065 17:36:18 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:52.065 17:36:18 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:52.065 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.065 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:19:52.065 altname enp9s0f1np1 00:19:52.065 inet 192.168.100.9/24 scope global mlx_0_1 00:19:52.065 valid_lft forever preferred_lft forever 00:19:52.065 17:36:18 -- nvmf/common.sh@411 -- # return 0 00:19:52.065 17:36:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:52.065 17:36:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:52.065 17:36:18 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:52.065 17:36:18 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:52.065 17:36:18 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:52.065 17:36:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.065 17:36:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:52.065 17:36:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:52.065 17:36:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.065 17:36:18 -- nvmf/common.sh@96 -- 
# (( 2 == 0 )) 00:19:52.065 17:36:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.065 17:36:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.065 17:36:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.065 17:36:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:52.065 17:36:18 -- nvmf/common.sh@105 -- # continue 2 00:19:52.065 17:36:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.065 17:36:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.065 17:36:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.065 17:36:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.065 17:36:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:52.065 17:36:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:52.065 17:36:18 -- nvmf/common.sh@105 -- # continue 2 00:19:52.065 17:36:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:52.065 17:36:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:52.065 17:36:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:52.065 17:36:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:52.065 17:36:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.065 17:36:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.065 17:36:18 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:52.065 17:36:18 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:52.065 17:36:18 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:52.065 17:36:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:52.065 17:36:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.065 17:36:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.065 17:36:18 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:52.065 192.168.100.9' 00:19:52.065 17:36:18 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:52.065 192.168.100.9' 00:19:52.065 17:36:18 -- nvmf/common.sh@446 -- # head -n 1 00:19:52.065 17:36:18 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:52.065 17:36:18 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:52.065 192.168.100.9' 00:19:52.065 17:36:18 -- nvmf/common.sh@447 -- # tail -n +2 00:19:52.065 17:36:18 -- nvmf/common.sh@447 -- # head -n 1 00:19:52.065 17:36:18 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:52.065 17:36:18 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:52.065 17:36:18 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:52.065 17:36:18 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:52.065 17:36:18 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:52.065 17:36:18 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:52.065 17:36:18 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:19:52.065 17:36:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:52.065 17:36:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:52.065 17:36:18 -- common/autotest_common.sh@10 -- # set +x 00:19:52.325 ************************************ 00:19:52.325 START TEST nvmf_target_disconnect_tc1 00:19:52.325 ************************************ 00:19:52.325 17:36:18 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:19:52.325 17:36:18 -- host/target_disconnect.sh@32 -- # set +e 00:19:52.325 17:36:18 -- host/target_disconnect.sh@33 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:52.325 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.325 [2024-04-18 17:36:18.728040] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:52.325 [2024-04-18 17:36:18.728109] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:52.325 [2024-04-18 17:36:18.728127] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7080 00:19:53.264 [2024-04-18 17:36:19.732246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:53.264 [2024-04-18 17:36:19.732307] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:19:53.264 [2024-04-18 17:36:19.732330] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:19:53.264 [2024-04-18 17:36:19.732405] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:53.264 [2024-04-18 17:36:19.732437] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:19:53.264 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:19:53.264 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:19:53.264 Initializing NVMe Controllers 00:19:53.264 17:36:19 -- host/target_disconnect.sh@33 -- # trap - ERR 00:19:53.264 17:36:19 -- host/target_disconnect.sh@33 -- # print_backtrace 00:19:53.264 17:36:19 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:19:53.264 17:36:19 -- common/autotest_common.sh@1139 -- # return 0 00:19:53.264 17:36:19 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:19:53.264 17:36:19 -- host/target_disconnect.sh@41 -- # set -e 00:19:53.264 00:19:53.264 real 0m1.114s 00:19:53.264 user 0m0.882s 00:19:53.264 sys 0m0.219s 00:19:53.264 17:36:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:53.264 17:36:19 -- common/autotest_common.sh@10 -- # set +x 00:19:53.264 ************************************ 00:19:53.264 END TEST nvmf_target_disconnect_tc1 00:19:53.264 ************************************ 00:19:53.264 17:36:19 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:19:53.264 17:36:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:53.264 17:36:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:53.264 17:36:19 -- common/autotest_common.sh@10 -- # set +x 00:19:53.523 ************************************ 00:19:53.523 START TEST nvmf_target_disconnect_tc2 00:19:53.523 ************************************ 00:19:53.523 17:36:19 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:19:53.523 17:36:19 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:19:53.523 17:36:19 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:19:53.523 17:36:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:53.523 17:36:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:53.523 17:36:19 -- common/autotest_common.sh@10 -- # set +x 00:19:53.523 17:36:19 -- nvmf/common.sh@470 -- # nvmfpid=2075000 00:19:53.523 17:36:19 -- nvmf/common.sh@469 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:19:53.523 17:36:19 -- nvmf/common.sh@471 -- # waitforlisten 2075000 00:19:53.523 17:36:19 -- common/autotest_common.sh@817 -- # '[' -z 2075000 ']' 00:19:53.523 17:36:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.523 17:36:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:53.523 17:36:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.523 17:36:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:53.523 17:36:19 -- common/autotest_common.sh@10 -- # set +x 00:19:53.523 [2024-04-18 17:36:19.920067] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:19:53.523 [2024-04-18 17:36:19.920155] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.523 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.523 [2024-04-18 17:36:19.985722] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.781 [2024-04-18 17:36:20.103331] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.781 [2024-04-18 17:36:20.103397] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.781 [2024-04-18 17:36:20.103420] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.781 [2024-04-18 17:36:20.103430] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.781 [2024-04-18 17:36:20.103440] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:53.781 [2024-04-18 17:36:20.103526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:53.781 [2024-04-18 17:36:20.103591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:53.781 [2024-04-18 17:36:20.103655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:53.781 [2024-04-18 17:36:20.103658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:54.346 17:36:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:54.346 17:36:20 -- common/autotest_common.sh@850 -- # return 0 00:19:54.346 17:36:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:54.346 17:36:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:54.346 17:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:54.346 17:36:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.346 17:36:20 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:54.346 17:36:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.346 17:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:54.604 Malloc0 00:19:54.604 17:36:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.604 17:36:20 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:19:54.604 17:36:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.604 17:36:20 -- common/autotest_common.sh@10 -- # set +x 00:19:54.604 [2024-04-18 17:36:20.930101] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11a1c00/0x11ad840) succeed. 00:19:54.604 [2024-04-18 17:36:20.941348] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11a31f0/0x124d940) succeed. 
00:19:54.604 17:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.604 17:36:21 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:54.604 17:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.604 17:36:21 -- common/autotest_common.sh@10 -- # set +x 00:19:54.604 17:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.604 17:36:21 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:54.604 17:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.604 17:36:21 -- common/autotest_common.sh@10 -- # set +x 00:19:54.604 17:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.604 17:36:21 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:54.604 17:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.604 17:36:21 -- common/autotest_common.sh@10 -- # set +x 00:19:54.604 [2024-04-18 17:36:21.111500] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:54.604 17:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.604 17:36:21 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:54.604 17:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.604 17:36:21 -- common/autotest_common.sh@10 -- # set +x 00:19:54.604 17:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.604 17:36:21 -- host/target_disconnect.sh@50 -- # reconnectpid=2075162 00:19:54.604 17:36:21 -- host/target_disconnect.sh@52 -- # sleep 2 00:19:54.604 17:36:21 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:19:54.863 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.767 17:36:23 -- host/target_disconnect.sh@53 -- # kill -9 2075000 00:19:56.767 17:36:23 -- host/target_disconnect.sh@55 -- # sleep 2 00:19:58.144 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with 
error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Write completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 Read completed with error (sct=0, sc=8) 00:19:58.145 starting I/O failed 00:19:58.145 [2024-04-18 17:36:24.285889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:58.145 [2024-04-18 17:36:24.288078] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:58.145 [2024-04-18 17:36:24.288122] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:58.145 [2024-04-18 17:36:24.288136] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:19:58.711 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2075000 Killed "${NVMF_APP[@]}" "$@" 00:19:58.711 17:36:25 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:19:58.711 17:36:25 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:19:58.711 17:36:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:58.711 17:36:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:58.711 17:36:25 -- common/autotest_common.sh@10 -- # set +x 00:19:58.711 17:36:25 -- nvmf/common.sh@470 -- # nvmfpid=2075682 00:19:58.711 17:36:25 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:19:58.711 17:36:25 -- nvmf/common.sh@471 -- # waitforlisten 2075682 00:19:58.711 17:36:25 -- common/autotest_common.sh@817 -- # '[' -z 2075682 ']' 00:19:58.711 17:36:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.711 17:36:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:58.711 17:36:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:58.711 17:36:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:58.711 17:36:25 -- common/autotest_common.sh@10 -- # set +x 00:19:58.711 [2024-04-18 17:36:25.175936] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:19:58.711 [2024-04-18 17:36:25.176025] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.711 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.711 [2024-04-18 17:36:25.239964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:58.970 [2024-04-18 17:36:25.292241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:58.970 qpair failed and we were unable to recover it. 00:19:58.970 [2024-04-18 17:36:25.294327] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:58.970 [2024-04-18 17:36:25.294364] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:58.970 [2024-04-18 17:36:25.294378] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:19:58.970 [2024-04-18 17:36:25.348621] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.970 [2024-04-18 17:36:25.348699] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.970 [2024-04-18 17:36:25.348714] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.970 [2024-04-18 17:36:25.348727] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.970 [2024-04-18 17:36:25.348737] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:58.970 [2024-04-18 17:36:25.348830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:58.970 [2024-04-18 17:36:25.348882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:58.970 [2024-04-18 17:36:25.348952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:58.970 [2024-04-18 17:36:25.348955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:58.970 17:36:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:58.970 17:36:25 -- common/autotest_common.sh@850 -- # return 0 00:19:58.970 17:36:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:58.970 17:36:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:58.970 17:36:25 -- common/autotest_common.sh@10 -- # set +x 00:19:58.970 17:36:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.970 17:36:25 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:58.970 17:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.970 17:36:25 -- common/autotest_common.sh@10 -- # set +x 00:19:59.229 Malloc0 00:19:59.230 17:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.230 17:36:25 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:19:59.230 17:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.230 17:36:25 -- common/autotest_common.sh@10 -- # set +x 00:19:59.230 [2024-04-18 17:36:25.553573] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf6bc00/0xf77840) succeed. 00:19:59.230 [2024-04-18 17:36:25.564706] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf6d1f0/0x1017940) succeed. 
00:19:59.230 17:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.230 17:36:25 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:59.230 17:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.230 17:36:25 -- common/autotest_common.sh@10 -- # set +x 00:19:59.230 17:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.230 17:36:25 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:59.230 17:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.230 17:36:25 -- common/autotest_common.sh@10 -- # set +x 00:19:59.230 17:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.230 17:36:25 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:59.230 17:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.230 17:36:25 -- common/autotest_common.sh@10 -- # set +x 00:19:59.230 [2024-04-18 17:36:25.739424] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:59.230 17:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.230 17:36:25 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:19:59.230 17:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.230 17:36:25 -- common/autotest_common.sh@10 -- # set +x 00:19:59.230 17:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.230 17:36:25 -- host/target_disconnect.sh@58 -- # wait 2075162 00:19:59.798 [2024-04-18 17:36:26.298574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:59.798 qpair failed and we were unable to recover it. 
00:20:01.172 Read completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Read completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Read completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Write completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Read completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Write completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Read completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Write completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Read completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Write completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Write completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Read completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Write completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Write completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.172 Read completed with error (sct=0, sc=8) 00:20:01.172 starting I/O failed 00:20:01.173 Read completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Read completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Read completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Read completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Read completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Write completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Write completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Write completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Write completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Read completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Write completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Write completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Read completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Read completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Write completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Read completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 Write completed with error (sct=0, sc=8) 00:20:01.173 starting I/O failed 00:20:01.173 [2024-04-18 17:36:27.303454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:01.173 [2024-04-18 17:36:27.303489] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:20:01.173 A controller has encountered a failure and is being reset. 
00:20:01.173 [2024-04-18 17:36:27.303558] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:20:01.173 [2024-04-18 17:36:27.305020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:01.173 Controller properly reset. 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Read completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Read completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Read completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Read completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Read completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Read completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Read completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Read completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Read completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Read completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Read completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Read completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 Write completed with error (sct=0, sc=8) 00:20:02.113 starting I/O failed 00:20:02.113 [2024-04-18 17:36:28.336572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 
Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Write completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 Read completed with error (sct=0, sc=8) 00:20:03.048 starting I/O failed 00:20:03.048 [2024-04-18 17:36:29.341036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:04.952 Initializing NVMe Controllers 00:20:04.952 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:04.952 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:04.952 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:20:04.952 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:20:04.952 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:20:04.952 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:20:04.952 Initialization complete. Launching workers. 
00:20:04.952 Starting thread on core 1 00:20:04.952 Starting thread on core 2 00:20:04.952 Starting thread on core 3 00:20:04.952 Starting thread on core 0 00:20:04.952 17:36:31 -- host/target_disconnect.sh@59 -- # sync 00:20:04.952 00:20:04.952 real 0m11.504s 00:20:04.952 user 0m36.836s 00:20:04.952 sys 0m2.302s 00:20:04.952 17:36:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:04.952 17:36:31 -- common/autotest_common.sh@10 -- # set +x 00:20:04.952 ************************************ 00:20:04.952 END TEST nvmf_target_disconnect_tc2 00:20:04.952 ************************************ 00:20:04.952 17:36:31 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:20:04.952 17:36:31 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:20:04.952 17:36:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:04.952 17:36:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:04.952 17:36:31 -- common/autotest_common.sh@10 -- # set +x 00:20:05.211 ************************************ 00:20:05.211 START TEST nvmf_target_disconnect_tc3 00:20:05.211 ************************************ 00:20:05.211 17:36:31 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc3 00:20:05.211 17:36:31 -- host/target_disconnect.sh@65 -- # reconnectpid=2076392 00:20:05.211 17:36:31 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:20:05.211 17:36:31 -- host/target_disconnect.sh@67 -- # sleep 2 00:20:05.211 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.117 17:36:33 -- host/target_disconnect.sh@68 -- # kill -9 2075682 00:20:07.117 17:36:33 -- host/target_disconnect.sh@70 -- # sleep 2 00:20:08.493 Write completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Write completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Write completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Write completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Write completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Write completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Write completed 
with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Write completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Write completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Write completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Write completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Write completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Read completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 Write completed with error (sct=0, sc=8) 00:20:08.493 starting I/O failed 00:20:08.493 [2024-04-18 17:36:34.680390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:20:09.062 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 2075682 Killed "${NVMF_APP[@]}" "$@" 00:20:09.062 17:36:35 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:20:09.062 17:36:35 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:20:09.062 17:36:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:09.062 17:36:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:09.062 17:36:35 -- common/autotest_common.sh@10 -- # set +x 00:20:09.062 17:36:35 -- nvmf/common.sh@470 -- # nvmfpid=2076918 00:20:09.062 17:36:35 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:09.062 17:36:35 -- nvmf/common.sh@471 -- # waitforlisten 2076918 00:20:09.062 17:36:35 -- common/autotest_common.sh@817 -- # '[' -z 2076918 ']' 00:20:09.062 17:36:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.062 17:36:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:09.062 17:36:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.062 17:36:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:09.062 17:36:35 -- common/autotest_common.sh@10 -- # set +x 00:20:09.062 [2024-04-18 17:36:35.561081] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 
00:20:09.062 [2024-04-18 17:36:35.561153] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.062 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.322 [2024-04-18 17:36:35.625418] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Read completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 Write completed with error (sct=0, sc=8) 00:20:09.322 starting I/O failed 00:20:09.322 [2024-04-18 17:36:35.685489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:09.322 [2024-04-18 17:36:35.734554] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:09.322 [2024-04-18 17:36:35.734611] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.322 [2024-04-18 17:36:35.734636] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.322 [2024-04-18 17:36:35.734650] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.322 [2024-04-18 17:36:35.734676] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.323 [2024-04-18 17:36:35.734751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:09.323 [2024-04-18 17:36:35.734809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:09.323 [2024-04-18 17:36:35.734879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:20:09.323 [2024-04-18 17:36:35.734882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:09.582 17:36:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:09.582 17:36:35 -- common/autotest_common.sh@850 -- # return 0 00:20:09.582 17:36:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:09.582 17:36:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:09.582 17:36:35 -- common/autotest_common.sh@10 -- # set +x 00:20:09.582 17:36:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.582 17:36:35 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:09.582 17:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.582 17:36:35 -- common/autotest_common.sh@10 -- # set +x 00:20:09.582 Malloc0 00:20:09.582 17:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.582 17:36:35 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:09.582 17:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.582 17:36:35 -- common/autotest_common.sh@10 -- # set +x 00:20:09.582 [2024-04-18 17:36:35.951430] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23c9c00/0x23d5840) succeed. 00:20:09.582 [2024-04-18 17:36:35.962562] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23cb1f0/0x2475940) succeed. 
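The rpc_cmd calls traced above are the test's wrapper around SPDK's scripts/rpc.py. A minimal hedged sketch of the same first two bring-up steps issued directly (the default /var/tmp/spdk.sock RPC socket is assumed); the subsystem, namespace and 192.168.100.9 listener calls that complete the bring-up follow in the trace below:

cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
# 64 MiB malloc bdev with 512-byte blocks, named Malloc0, as in the trace above.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# RDMA transport with 1024 shared buffers, matching nvmf_create_transport above.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
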
00:20:09.582 17:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.582 17:36:36 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:09.582 17:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.582 17:36:36 -- common/autotest_common.sh@10 -- # set +x 00:20:09.582 17:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.582 17:36:36 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:09.582 17:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.582 17:36:36 -- common/autotest_common.sh@10 -- # set +x 00:20:09.841 17:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.841 17:36:36 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:20:09.841 17:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.841 17:36:36 -- common/autotest_common.sh@10 -- # set +x 00:20:09.841 [2024-04-18 17:36:36.137087] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:20:09.841 17:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.841 17:36:36 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:20:09.841 17:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.841 17:36:36 -- common/autotest_common.sh@10 -- # set +x 00:20:09.841 17:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.841 17:36:36 -- host/target_disconnect.sh@73 -- # wait 2076392 00:20:10.408 Write completed with error (sct=0, sc=8) 00:20:10.408 starting I/O failed 00:20:10.408 Read completed with error (sct=0, sc=8) 00:20:10.408 starting I/O failed 00:20:10.408 Write completed with error (sct=0, sc=8) 00:20:10.408 starting I/O failed 00:20:10.408 Write completed with error (sct=0, sc=8) 00:20:10.408 starting I/O failed 00:20:10.408 Write completed with error (sct=0, sc=8) 00:20:10.408 starting I/O failed 00:20:10.408 Read completed with error (sct=0, sc=8) 00:20:10.408 starting I/O failed 00:20:10.408 Write completed with error (sct=0, sc=8) 00:20:10.408 starting I/O failed 00:20:10.408 Read completed with error (sct=0, sc=8) 00:20:10.408 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Read completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Read completed 
with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Read completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Read completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Read completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Read completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 Write completed with error (sct=0, sc=8) 00:20:10.409 starting I/O failed 00:20:10.409 [2024-04-18 17:36:36.690297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:11.346 Read completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Read completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Write completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Write completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Write completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Read completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Write completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Read completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Write completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Write completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Write completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Write completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Read completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Write completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Write completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Write completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Read completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Read completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Read completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Write completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.346 Write completed with error (sct=0, sc=8) 00:20:11.346 starting I/O failed 00:20:11.347 Write completed with error (sct=0, sc=8) 00:20:11.347 starting I/O failed 00:20:11.347 Read completed with error (sct=0, sc=8) 00:20:11.347 starting I/O failed 00:20:11.347 Read completed with error (sct=0, sc=8) 00:20:11.347 starting I/O failed 00:20:11.347 Read completed with error (sct=0, sc=8) 00:20:11.347 starting I/O failed 00:20:11.347 Read completed with error (sct=0, sc=8) 00:20:11.347 starting I/O failed 00:20:11.347 Write completed with error (sct=0, sc=8) 00:20:11.347 starting I/O failed 00:20:11.347 Read completed with error (sct=0, sc=8) 00:20:11.347 starting I/O failed 00:20:11.347 Write completed with 
error (sct=0, sc=8) 00:20:11.347 starting I/O failed 00:20:11.347 Read completed with error (sct=0, sc=8) 00:20:11.347 starting I/O failed 00:20:11.347 Write completed with error (sct=0, sc=8) 00:20:11.347 starting I/O failed 00:20:11.347 Write completed with error (sct=0, sc=8) 00:20:11.347 starting I/O failed 00:20:11.347 [2024-04-18 17:36:37.695239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:11.347 [2024-04-18 17:36:37.695288] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:20:11.347 A controller has encountered a failure and is being reset. 00:20:11.347 Resorting to new failover address 192.168.100.9 00:20:11.347 [2024-04-18 17:36:37.695337] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:11.347 [2024-04-18 17:36:37.695376] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:20:11.347 [2024-04-18 17:36:37.711904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:11.347 Controller properly reset. 00:20:15.573 Initializing NVMe Controllers 00:20:15.573 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.573 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.573 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:20:15.573 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:20:15.573 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:20:15.573 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:20:15.573 Initialization complete. Launching workers. 
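For reference, the host-side workload that produced the failover above is the reconnect example started at the top of tc3 (host/target_disconnect.sh line 63); copied from that trace, with -r carrying both the primary target address and the alternate address 192.168.100.9 that the tool resorts to once the qpairs on 192.168.100.8 fail:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
    -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
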
00:20:15.573 Starting thread on core 1 00:20:15.573 Starting thread on core 2 00:20:15.573 Starting thread on core 3 00:20:15.573 Starting thread on core 0 00:20:15.573 17:36:41 -- host/target_disconnect.sh@74 -- # sync 00:20:15.573 00:20:15.573 real 0m10.266s 00:20:15.573 user 0m59.342s 00:20:15.573 sys 0m1.691s 00:20:15.573 17:36:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:15.573 17:36:41 -- common/autotest_common.sh@10 -- # set +x 00:20:15.573 ************************************ 00:20:15.573 END TEST nvmf_target_disconnect_tc3 00:20:15.573 ************************************ 00:20:15.573 17:36:41 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:20:15.573 17:36:41 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:20:15.573 17:36:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:15.573 17:36:41 -- nvmf/common.sh@117 -- # sync 00:20:15.573 17:36:41 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:15.573 17:36:41 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:15.573 17:36:41 -- nvmf/common.sh@120 -- # set +e 00:20:15.573 17:36:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.573 17:36:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:15.573 rmmod nvme_rdma 00:20:15.573 rmmod nvme_fabrics 00:20:15.573 17:36:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.573 17:36:41 -- nvmf/common.sh@124 -- # set -e 00:20:15.573 17:36:41 -- nvmf/common.sh@125 -- # return 0 00:20:15.573 17:36:41 -- nvmf/common.sh@478 -- # '[' -n 2076918 ']' 00:20:15.573 17:36:41 -- nvmf/common.sh@479 -- # killprocess 2076918 00:20:15.573 17:36:41 -- common/autotest_common.sh@936 -- # '[' -z 2076918 ']' 00:20:15.573 17:36:41 -- common/autotest_common.sh@940 -- # kill -0 2076918 00:20:15.573 17:36:41 -- common/autotest_common.sh@941 -- # uname 00:20:15.573 17:36:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:15.573 17:36:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2076918 00:20:15.573 17:36:41 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:20:15.573 17:36:41 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:20:15.573 17:36:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2076918' 00:20:15.573 killing process with pid 2076918 00:20:15.573 17:36:41 -- common/autotest_common.sh@955 -- # kill 2076918 00:20:15.573 17:36:41 -- common/autotest_common.sh@960 -- # wait 2076918 00:20:15.832 17:36:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:15.832 17:36:42 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:15.832 00:20:15.832 real 0m25.812s 00:20:15.832 user 2m3.315s 00:20:15.832 sys 0m6.186s 00:20:15.832 17:36:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:15.832 17:36:42 -- common/autotest_common.sh@10 -- # set +x 00:20:15.832 ************************************ 00:20:15.832 END TEST nvmf_target_disconnect 00:20:15.832 ************************************ 00:20:15.832 17:36:42 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:20:15.832 17:36:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:15.832 17:36:42 -- common/autotest_common.sh@10 -- # set +x 00:20:15.832 17:36:42 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:20:15.832 00:20:15.832 real 14m40.327s 00:20:15.832 user 45m36.295s 00:20:15.832 sys 2m10.680s 00:20:15.832 17:36:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:15.832 17:36:42 -- common/autotest_common.sh@10 -- # set +x 00:20:15.832 ************************************ 
00:20:15.832 END TEST nvmf_rdma 00:20:15.832 ************************************ 00:20:15.832 17:36:42 -- spdk/autotest.sh@283 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:20:15.832 17:36:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:15.832 17:36:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:15.832 17:36:42 -- common/autotest_common.sh@10 -- # set +x 00:20:16.091 ************************************ 00:20:16.091 START TEST spdkcli_nvmf_rdma 00:20:16.091 ************************************ 00:20:16.091 17:36:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:20:16.091 * Looking for test storage... 00:20:16.091 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:20:16.091 17:36:42 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:20:16.091 17:36:42 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:20:16.091 17:36:42 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:20:16.091 17:36:42 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.091 17:36:42 -- nvmf/common.sh@7 -- # uname -s 00:20:16.091 17:36:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.091 17:36:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.091 17:36:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.091 17:36:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.091 17:36:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.091 17:36:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.091 17:36:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.091 17:36:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.091 17:36:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.091 17:36:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.092 17:36:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.092 17:36:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.092 17:36:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.092 17:36:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.092 17:36:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.092 17:36:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.092 17:36:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:16.092 17:36:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.092 17:36:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.092 17:36:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.092 17:36:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.092 17:36:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.092 17:36:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.092 17:36:42 -- paths/export.sh@5 -- # export PATH 00:20:16.092 17:36:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.092 17:36:42 -- nvmf/common.sh@47 -- # : 0 00:20:16.092 17:36:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:16.092 17:36:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:16.092 17:36:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.092 17:36:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.092 17:36:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.092 17:36:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:16.092 17:36:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:16.092 17:36:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:16.092 17:36:42 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:20:16.092 17:36:42 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:20:16.092 17:36:42 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:20:16.092 17:36:42 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:20:16.092 17:36:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:16.092 17:36:42 -- common/autotest_common.sh@10 -- # set +x 00:20:16.092 17:36:42 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:20:16.092 17:36:42 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2077836 00:20:16.092 17:36:42 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:20:16.092 17:36:42 -- spdkcli/common.sh@34 -- # waitforlisten 2077836 00:20:16.092 17:36:42 -- common/autotest_common.sh@817 -- # '[' -z 2077836 ']' 00:20:16.092 17:36:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.092 17:36:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:16.092 17:36:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:16.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.092 17:36:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:16.092 17:36:42 -- common/autotest_common.sh@10 -- # set +x 00:20:16.092 [2024-04-18 17:36:42.519451] Starting SPDK v24.05-pre git sha1 bd2a0934e / DPDK 23.11.0 initialization... 00:20:16.092 [2024-04-18 17:36:42.519531] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077836 ] 00:20:16.092 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.092 [2024-04-18 17:36:42.576149] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:16.350 [2024-04-18 17:36:42.683861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.350 [2024-04-18 17:36:42.683866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.350 17:36:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:16.350 17:36:42 -- common/autotest_common.sh@850 -- # return 0 00:20:16.350 17:36:42 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:20:16.350 17:36:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:16.350 17:36:42 -- common/autotest_common.sh@10 -- # set +x 00:20:16.350 17:36:42 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:20:16.350 17:36:42 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:20:16.350 17:36:42 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:20:16.350 17:36:42 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:16.351 17:36:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.351 17:36:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:16.351 17:36:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:16.351 17:36:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:16.351 17:36:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.351 17:36:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:16.351 17:36:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.351 17:36:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:16.351 17:36:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:16.351 17:36:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:16.351 17:36:42 -- common/autotest_common.sh@10 -- # set +x 00:20:18.254 17:36:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:18.254 17:36:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.254 17:36:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.254 17:36:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.254 17:36:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.254 17:36:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.254 17:36:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.254 17:36:44 -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.254 17:36:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.254 17:36:44 -- nvmf/common.sh@296 -- # e810=() 00:20:18.254 17:36:44 -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.254 17:36:44 -- nvmf/common.sh@297 -- # x722=() 00:20:18.254 17:36:44 -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.254 17:36:44 -- nvmf/common.sh@298 -- # mlx=() 00:20:18.254 17:36:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.254 17:36:44 -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.254 17:36:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.254 17:36:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.254 17:36:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.254 17:36:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.254 17:36:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.254 17:36:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.254 17:36:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.254 17:36:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.254 17:36:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.254 17:36:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.254 17:36:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.254 17:36:44 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:18.254 17:36:44 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:18.254 17:36:44 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:18.254 17:36:44 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:18.254 17:36:44 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:18.254 17:36:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.254 17:36:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.254 17:36:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x15b3 - 0x1017)' 00:20:18.254 Found 0000:09:00.0 (0x15b3 - 0x1017) 00:20:18.254 17:36:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:18.254 17:36:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:18.254 17:36:44 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:20:18.254 17:36:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:18.254 17:36:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.254 17:36:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x15b3 - 0x1017)' 00:20:18.254 Found 0000:09:00.1 (0x15b3 - 0x1017) 00:20:18.254 17:36:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:18.254 17:36:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:18.254 17:36:44 -- nvmf/common.sh@350 -- # [[ 0x1017 == \0\x\1\0\1\7 ]] 00:20:18.254 17:36:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:18.254 17:36:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.254 17:36:44 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:18.254 17:36:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.254 17:36:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.254 17:36:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:18.254 17:36:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.254 17:36:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.0: mlx_0_0' 00:20:18.254 Found net devices under 0000:09:00.0: mlx_0_0 00:20:18.254 17:36:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.254 17:36:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.254 17:36:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.254 17:36:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:18.254 17:36:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:20:18.254 17:36:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:09:00.1: mlx_0_1' 00:20:18.254 Found net devices under 0000:09:00.1: mlx_0_1 00:20:18.254 17:36:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.254 17:36:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:18.254 17:36:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:18.254 17:36:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:18.254 17:36:44 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:18.254 17:36:44 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:20:18.254 17:36:44 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:18.254 17:36:44 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:18.254 17:36:44 -- nvmf/common.sh@58 -- # uname 00:20:18.254 17:36:44 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:18.254 17:36:44 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:18.254 17:36:44 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:18.254 17:36:44 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:18.254 17:36:44 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:18.254 17:36:44 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:18.254 17:36:44 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:18.254 17:36:44 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:18.254 17:36:44 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:18.254 17:36:44 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:18.254 17:36:44 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:18.254 17:36:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:18.254 17:36:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:18.255 17:36:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:18.255 17:36:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:18.255 17:36:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:18.255 17:36:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:18.255 17:36:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.255 17:36:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:18.255 17:36:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:18.255 17:36:44 -- nvmf/common.sh@105 -- # continue 2 00:20:18.255 17:36:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:18.255 17:36:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.255 17:36:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:18.255 17:36:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.255 17:36:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:18.255 17:36:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:18.255 17:36:44 -- nvmf/common.sh@105 -- # continue 2 00:20:18.255 17:36:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:18.255 17:36:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:18.255 17:36:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:18.255 17:36:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:18.255 17:36:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:18.255 17:36:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:18.255 17:36:44 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:18.255 17:36:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:18.255 17:36:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:18.255 82: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:18.255 link/ether 
b8:59:9f:af:fe:10 brd ff:ff:ff:ff:ff:ff 00:20:18.255 altname enp9s0f0np0 00:20:18.255 inet 192.168.100.8/24 scope global mlx_0_0 00:20:18.255 valid_lft forever preferred_lft forever 00:20:18.255 17:36:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:18.255 17:36:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:18.255 17:36:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:18.255 17:36:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:18.255 17:36:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:18.255 17:36:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:18.255 17:36:44 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:18.255 17:36:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:18.255 17:36:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:18.255 83: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:18.255 link/ether b8:59:9f:af:fe:11 brd ff:ff:ff:ff:ff:ff 00:20:18.255 altname enp9s0f1np1 00:20:18.255 inet 192.168.100.9/24 scope global mlx_0_1 00:20:18.255 valid_lft forever preferred_lft forever 00:20:18.255 17:36:44 -- nvmf/common.sh@411 -- # return 0 00:20:18.255 17:36:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:18.255 17:36:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:18.255 17:36:44 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:20:18.255 17:36:44 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:18.255 17:36:44 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:18.255 17:36:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:18.255 17:36:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:18.255 17:36:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:18.255 17:36:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:18.255 17:36:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:18.255 17:36:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:18.255 17:36:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.255 17:36:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:18.255 17:36:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:18.255 17:36:44 -- nvmf/common.sh@105 -- # continue 2 00:20:18.255 17:36:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:18.255 17:36:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.255 17:36:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:18.255 17:36:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.255 17:36:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:18.255 17:36:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:18.255 17:36:44 -- nvmf/common.sh@105 -- # continue 2 00:20:18.255 17:36:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:18.255 17:36:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:18.255 17:36:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:18.255 17:36:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:18.255 17:36:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:18.255 17:36:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:18.255 17:36:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:18.255 17:36:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:18.255 17:36:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:18.255 17:36:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show 
mlx_0_1 00:20:18.255 17:36:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:18.255 17:36:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:18.255 17:36:44 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:18.255 192.168.100.9' 00:20:18.255 17:36:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:18.255 192.168.100.9' 00:20:18.255 17:36:44 -- nvmf/common.sh@446 -- # head -n 1 00:20:18.255 17:36:44 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:18.255 17:36:44 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:18.255 192.168.100.9' 00:20:18.255 17:36:44 -- nvmf/common.sh@447 -- # tail -n +2 00:20:18.255 17:36:44 -- nvmf/common.sh@447 -- # head -n 1 00:20:18.255 17:36:44 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:18.255 17:36:44 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:18.255 17:36:44 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:18.255 17:36:44 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:18.255 17:36:44 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:18.255 17:36:44 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:18.255 17:36:44 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:20:18.255 17:36:44 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:20:18.255 17:36:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:18.255 17:36:44 -- common/autotest_common.sh@10 -- # set +x 00:20:18.255 17:36:44 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:18.255 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:18.255 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:20:18.255 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:20:18.255 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:20:18.255 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:20:18.255 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:20:18.255 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:20:18.255 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:20:18.255 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 
00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:18.255 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:20:18.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:20:18.256 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:20:18.256 ' 00:20:18.823 [2024-04-18 17:36:45.148435] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:21.381 [2024-04-18 17:36:47.332649] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6dc5e0/0x82e580) succeed. 00:20:21.381 [2024-04-18 17:36:47.347363] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6ddb30/0x6ee380) succeed. 
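The argument the test hands to spdkcli_job.py above is a list of 'command' 'expected output' match triples, and each command is an ordinary spdkcli command. As a hedged illustration, a subset of the same configuration issued one command at a time with scripts/spdkcli.py, assuming it accepts a single command on its argument list the way the ll /nvmf match step below does:

cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc3
./scripts/spdkcli.py nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4
./scripts/spdkcli.py ll /nvmf   # same listing the check_match step compares against its .match file
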
00:20:22.321 [2024-04-18 17:36:48.656884] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:20:24.856 [2024-04-18 17:36:50.944348] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:20:26.762 [2024-04-18 17:36:52.915101] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:20:28.140 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:20:28.140 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:20:28.140 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:20:28.141 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:20:28.141 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:20:28.141 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:20:28.141 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:20:28.141 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:20:28.141 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:20:28.141 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:20:28.141 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:20:28.141 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:20:28.141 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:20:28.141 17:36:54 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:20:28.141 17:36:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:28.141 17:36:54 -- common/autotest_common.sh@10 -- # set +x 00:20:28.141 17:36:54 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:20:28.141 17:36:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:28.141 17:36:54 -- common/autotest_common.sh@10 -- # set +x 00:20:28.141 17:36:54 -- spdkcli/nvmf.sh@69 -- # check_match 00:20:28.141 17:36:54 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:20:28.399 17:36:54 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:20:28.657 17:36:54 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:20:28.657 17:36:54 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:20:28.657 17:36:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:28.657 17:36:54 -- common/autotest_common.sh@10 -- # set +x 00:20:28.657 17:36:54 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:20:28.657 17:36:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:28.657 17:36:54 -- common/autotest_common.sh@10 -- # set +x 00:20:28.657 17:36:54 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:20:28.657 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:20:28.657 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:20:28.657 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:20:28.657 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:20:28.657 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:20:28.657 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:20:28.657 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:20:28.657 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:20:28.657 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:20:28.657 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:20:28.657 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:20:28.657 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:20:28.657 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:20:28.657 ' 00:20:33.934 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:20:33.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:20:33.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:20:33.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:20:33.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:20:33.934 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:20:33.934 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:20:33.934 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:20:33.934 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:20:33.934 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:20:33.934 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:20:33.934 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:20:33.934 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:20:33.934 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:20:33.934 17:37:00 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:20:33.934 17:37:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:33.934 17:37:00 -- common/autotest_common.sh@10 -- # set +x 00:20:33.934 17:37:00 -- spdkcli/nvmf.sh@90 -- # killprocess 2077836 00:20:33.934 17:37:00 -- common/autotest_common.sh@936 -- # '[' -z 2077836 ']' 00:20:33.934 17:37:00 -- common/autotest_common.sh@940 -- # kill -0 2077836 00:20:33.934 17:37:00 -- common/autotest_common.sh@941 -- # uname 00:20:33.934 17:37:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:33.934 17:37:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2077836 00:20:33.934 17:37:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:33.934 17:37:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:33.934 17:37:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2077836' 00:20:33.934 killing process with pid 2077836 00:20:33.934 17:37:00 -- common/autotest_common.sh@955 -- # kill 2077836 00:20:33.934 [2024-04-18 17:37:00.236576] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:33.934 17:37:00 -- common/autotest_common.sh@960 -- # wait 2077836 00:20:34.194 17:37:00 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:20:34.194 17:37:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:34.194 17:37:00 -- nvmf/common.sh@117 -- # sync 00:20:34.194 17:37:00 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:34.194 17:37:00 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:34.194 17:37:00 -- nvmf/common.sh@120 -- # set +e 00:20:34.194 17:37:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:34.194 17:37:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:34.194 rmmod nvme_rdma 00:20:34.194 rmmod nvme_fabrics 00:20:34.194 17:37:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
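The rmmod output above comes from the nvmf/common.sh teardown traced at lines 117-125: it syncs, relaxes errexit, and retries unloading the fabric modules up to 20 times. A minimal sketch of that loop, reconstructed only from the xtrace (the break and back-off handling between attempts are assumptions; only the modprobe calls and the {1..20} bound appear in the log):

set +e                                        # tolerate "module not currently loaded" failures
for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1                                   # assumed pause between retries
done
set -e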
00:20:34.194 17:37:00 -- nvmf/common.sh@124 -- # set -e 00:20:34.194 17:37:00 -- nvmf/common.sh@125 -- # return 0 00:20:34.194 17:37:00 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:20:34.194 17:37:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:34.194 17:37:00 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:34.194 00:20:34.194 real 0m18.181s 00:20:34.194 user 0m38.684s 00:20:34.194 sys 0m2.156s 00:20:34.194 17:37:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:34.194 17:37:00 -- common/autotest_common.sh@10 -- # set +x 00:20:34.194 ************************************ 00:20:34.194 END TEST spdkcli_nvmf_rdma 00:20:34.194 ************************************ 00:20:34.194 17:37:00 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:20:34.194 17:37:00 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:20:34.194 17:37:00 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:20:34.194 17:37:00 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:34.194 17:37:00 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:20:34.194 17:37:00 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:34.194 17:37:00 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:20:34.194 17:37:00 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:20:34.194 17:37:00 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:20:34.194 17:37:00 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:34.194 17:37:00 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:20:34.194 17:37:00 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:20:34.194 17:37:00 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:20:34.194 17:37:00 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:20:34.194 17:37:00 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:20:34.194 17:37:00 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:20:34.194 17:37:00 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:20:34.194 17:37:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:34.194 17:37:00 -- common/autotest_common.sh@10 -- # set +x 00:20:34.194 17:37:00 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:20:34.194 17:37:00 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:20:34.194 17:37:00 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:20:34.194 17:37:00 -- common/autotest_common.sh@10 -- # set +x 00:20:36.154 INFO: APP EXITING 00:20:36.154 INFO: killing all VMs 00:20:36.154 INFO: killing vhost app 00:20:36.154 WARN: no vhost pid file found 00:20:36.154 INFO: EXIT DONE 00:20:37.090 Waiting for block devices as requested 00:20:37.091 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:20:37.091 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:37.091 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:37.091 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:37.350 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:37.350 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:37.350 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:37.350 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:37.609 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:37.609 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:37.609 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:37.609 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:37.868 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:37.868 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:37.868 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:37.868 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:38.127 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:39.505 Cleaning 00:20:39.505 Removing: 
/var/run/dpdk/spdk0/config 00:20:39.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:39.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:39.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:39.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:39.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:20:39.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:20:39.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:20:39.505 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:20:39.505 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:39.505 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:39.505 Removing: /var/run/dpdk/spdk1/config 00:20:39.505 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:39.505 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:39.505 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:20:39.505 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:39.505 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:20:39.505 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:20:39.505 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:20:39.505 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:20:39.505 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:39.505 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:39.505 Removing: /var/run/dpdk/spdk1/mp_socket 00:20:39.505 Removing: /var/run/dpdk/spdk2/config 00:20:39.505 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:39.505 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:39.505 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:39.505 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:39.505 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:20:39.505 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:20:39.505 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:20:39.505 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:20:39.505 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:39.505 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:39.505 Removing: /var/run/dpdk/spdk3/config 00:20:39.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:39.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:39.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:39.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:39.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:20:39.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:20:39.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:20:39.506 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:20:39.506 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:39.506 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:39.506 Removing: /var/run/dpdk/spdk4/config 00:20:39.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:39.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:39.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:39.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:39.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:20:39.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:20:39.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:20:39.506 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:20:39.506 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:39.506 Removing: 
/var/run/dpdk/spdk4/hugepage_info 00:20:39.506 Removing: /dev/shm/bdevperf_trace.pid1965515 00:20:39.506 Removing: /dev/shm/bdevperf_trace.pid2031835 00:20:39.506 Removing: /dev/shm/bdev_svc_trace.1 00:20:39.506 Removing: /dev/shm/nvmf_trace.0 00:20:39.506 Removing: /dev/shm/spdk_tgt_trace.pid1888626 00:20:39.506 Removing: /var/run/dpdk/spdk0 00:20:39.506 Removing: /var/run/dpdk/spdk1 00:20:39.506 Removing: /var/run/dpdk/spdk2 00:20:39.506 Removing: /var/run/dpdk/spdk3 00:20:39.506 Removing: /var/run/dpdk/spdk4 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1886906 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1887672 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1888626 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1889110 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1889810 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1890082 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1890807 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1890824 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1891083 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1893950 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1894936 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1895259 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1895587 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1895794 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1896005 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1896177 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1896453 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1896645 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1897106 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1899579 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1899750 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1899916 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1899935 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1900840 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1900886 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1901310 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1901351 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1901616 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1901629 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1901918 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1901928 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1902378 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1902600 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1902801 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1903110 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1903142 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1903354 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1903518 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1903801 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1903965 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1904249 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1904418 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1904593 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1904861 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1905031 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1905305 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1905479 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1905658 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1905922 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1906093 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1906365 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1906537 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1906782 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1906989 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1907160 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1907435 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1907604 00:20:39.506 
Removing: /var/run/dpdk/spdk_pid1907804 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1908025 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1910257 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1942333 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1944685 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1950559 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1953593 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1955425 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1955956 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1965515 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1965669 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1968051 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1971744 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1974311 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1980176 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1995531 00:20:39.506 Removing: /var/run/dpdk/spdk_pid1997419 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2009135 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2030084 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2030972 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2031835 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2034691 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2038380 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2039111 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2039773 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2040432 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2040699 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2043183 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2043185 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2045712 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2046101 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2046536 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2047022 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2047149 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2049653 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2049985 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2052386 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2054248 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2058380 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2058382 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2070412 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2070693 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2074843 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2075162 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2076392 00:20:39.506 Removing: /var/run/dpdk/spdk_pid2077836 00:20:39.506 Clean 00:20:39.765 17:37:06 -- common/autotest_common.sh@1437 -- # return 0 00:20:39.765 17:37:06 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:20:39.765 17:37:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:39.765 17:37:06 -- common/autotest_common.sh@10 -- # set +x 00:20:39.765 17:37:06 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:20:39.765 17:37:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:39.765 17:37:06 -- common/autotest_common.sh@10 -- # set +x 00:20:39.765 17:37:06 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:20:39.765 17:37:06 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:20:39.765 17:37:06 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:20:39.765 17:37:06 -- spdk/autotest.sh@389 -- # hash lcov 00:20:39.765 17:37:06 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:20:39.765 17:37:06 -- spdk/autotest.sh@391 -- # hostname 00:20:39.765 17:37:06 -- 
spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:20:40.023 geninfo: WARNING: invalid characters removed from testname! 00:21:06.566 17:37:31 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:21:09.094 17:37:35 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:21:11.623 17:37:38 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:21:14.905 17:37:40 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:21:17.435 17:37:43 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:21:20.000 17:37:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:21:22.528 17:37:48 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:22.528 17:37:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:22.528 17:37:49 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:21:22.528 17:37:49 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.528 17:37:49 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.528 17:37:49 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.528 17:37:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.528 17:37:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.528 17:37:49 -- paths/export.sh@5 -- $ export PATH 00:21:22.528 17:37:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.528 17:37:49 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:21:22.528 17:37:49 -- common/autobuild_common.sh@435 -- $ date +%s 00:21:22.528 17:37:49 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713454669.XXXXXX 00:21:22.528 17:37:49 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713454669.RqQMrw 00:21:22.528 17:37:49 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:21:22.528 17:37:49 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:21:22.528 17:37:49 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:21:22.528 17:37:49 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:21:22.528 17:37:49 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:21:22.528 17:37:49 -- common/autobuild_common.sh@451 -- $ get_config_params 00:21:22.528 17:37:49 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:21:22.528 17:37:49 -- common/autotest_common.sh@10 -- $ set +x 00:21:22.528 17:37:49 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:21:22.528 17:37:49 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:21:22.528 17:37:49 -- pm/common@17 -- $ local monitor 00:21:22.528 17:37:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:22.788 
17:37:49 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2088953 00:21:22.788 17:37:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:22.788 17:37:49 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2088955 00:21:22.788 17:37:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:22.788 17:37:49 -- pm/common@21 -- $ date +%s 00:21:22.788 17:37:49 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2088957 00:21:22.788 17:37:49 -- pm/common@21 -- $ date +%s 00:21:22.788 17:37:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:22.788 17:37:49 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2088961 00:21:22.788 17:37:49 -- pm/common@21 -- $ date +%s 00:21:22.788 17:37:49 -- pm/common@26 -- $ sleep 1 00:21:22.788 17:37:49 -- pm/common@21 -- $ date +%s 00:21:22.788 17:37:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713454669 00:21:22.788 17:37:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713454669 00:21:22.788 17:37:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713454669 00:21:22.788 17:37:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713454669 00:21:22.788 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713454669_collect-vmstat.pm.log 00:21:22.788 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713454669_collect-bmc-pm.bmc.pm.log 00:21:22.788 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713454669_collect-cpu-temp.pm.log 00:21:22.788 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713454669_collect-cpu-load.pm.log 00:21:23.727 17:37:50 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:21:23.727 17:37:50 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:21:23.727 17:37:50 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:21:23.727 17:37:50 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:21:23.727 17:37:50 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:21:23.727 17:37:50 -- spdk/autopackage.sh@19 -- $ timing_finish 00:21:23.727 17:37:50 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:23.727 17:37:50 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:21:23.727 17:37:50 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:21:23.727 17:37:50 -- spdk/autopackage.sh@20 -- $ exit 0 00:21:23.727 17:37:50 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:21:23.727 17:37:50 -- pm/common@30 -- $ signal_monitor_resources TERM 00:21:23.727 17:37:50 -- pm/common@41 -- $ local monitor pid pids 
signal=TERM 00:21:23.727 17:37:50 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:23.727 17:37:50 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:21:23.727 17:37:50 -- pm/common@45 -- $ pid=2088982 00:21:23.727 17:37:50 -- pm/common@52 -- $ sudo kill -TERM 2088982 00:21:23.727 17:37:50 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:23.727 17:37:50 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:21:23.727 17:37:50 -- pm/common@45 -- $ pid=2088981 00:21:23.727 17:37:50 -- pm/common@52 -- $ sudo kill -TERM 2088981 00:21:23.727 17:37:50 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:23.727 17:37:50 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:21:23.727 17:37:50 -- pm/common@45 -- $ pid=2088980 00:21:23.727 17:37:50 -- pm/common@52 -- $ sudo kill -TERM 2088980 00:21:23.727 17:37:50 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:23.727 17:37:50 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:21:23.727 17:37:50 -- pm/common@45 -- $ pid=2088983 00:21:23.727 17:37:50 -- pm/common@52 -- $ sudo kill -TERM 2088983 00:21:23.727 + [[ -n 1804779 ]] 00:21:23.727 + sudo kill 1804779 00:21:23.736 [Pipeline] } 00:21:23.754 [Pipeline] // stage 00:21:23.759 [Pipeline] } 00:21:23.777 [Pipeline] // timeout 00:21:23.782 [Pipeline] } 00:21:23.799 [Pipeline] // catchError 00:21:23.805 [Pipeline] } 00:21:23.821 [Pipeline] // wrap 00:21:23.827 [Pipeline] } 00:21:23.838 [Pipeline] // catchError 00:21:23.846 [Pipeline] stage 00:21:23.848 [Pipeline] { (Epilogue) 00:21:23.860 [Pipeline] catchError 00:21:23.861 [Pipeline] { 00:21:23.874 [Pipeline] echo 00:21:23.875 Cleanup processes 00:21:23.881 [Pipeline] sh 00:21:24.168 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:21:24.168 2089112 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:21:24.168 2089247 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:21:24.182 [Pipeline] sh 00:21:24.467 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:21:24.467 ++ grep -v 'sudo pgrep' 00:21:24.467 ++ awk '{print $1}' 00:21:24.467 + sudo kill -9 2089112 00:21:24.480 [Pipeline] sh 00:21:24.763 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:32.884 [Pipeline] sh 00:21:33.171 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:33.171 Artifacts sizes are good 00:21:33.188 [Pipeline] archiveArtifacts 00:21:33.195 Archiving artifacts 00:21:33.387 [Pipeline] sh 00:21:33.673 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:21:33.691 [Pipeline] cleanWs 00:21:33.700 [WS-CLEANUP] Deleting project workspace... 00:21:33.700 [WS-CLEANUP] Deferred wipeout is used... 00:21:33.707 [WS-CLEANUP] done 00:21:33.709 [Pipeline] } 00:21:33.731 [Pipeline] // catchError 00:21:33.744 [Pipeline] sh 00:21:34.025 + logger -p user.info -t JENKINS-CI 00:21:34.034 [Pipeline] } 00:21:34.052 [Pipeline] // stage 00:21:34.058 [Pipeline] } 00:21:34.076 [Pipeline] // node 00:21:34.084 [Pipeline] End of Pipeline 00:21:34.127 Finished: SUCCESS
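For reference, the coverage post-processing traced at autotest.sh@391-398 earlier in this log condenses to the steps below. This is a sketch, not the script itself: LCOV_OPTS and $out stand in for the long option list and output directory shown above, and the loop over filters replaces the five separate remove calls.

LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output

lcov $LCOV_OPTS -c -d ./spdk -t "$(hostname)" -o $out/cov_test.info                  # capture this test run
lcov $LCOV_OPTS -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info   # merge with the baseline
for filter in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r $out/cov_total.info "$filter" -o $out/cov_total.info          # strip external/third-party code
done
rm -f $out/cov_base.info $out/cov_test.info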